23rd June 2016

Implications of Gigabit Internet to the Home

I recently got gigabit Internet at home (the CenturyLink variant was the only choice in my neighborhood). Overall it's going well so far, although I can easily peg my router (ASUS RT-AC66U) to the point where it can't route traffic even though there is plenty of remaining bandwidth. The router is just a couple of years old and supports 802.11ac on both bands, but has a fairly puny CPU which just can't keep up. I'll post a bit more on my next router in a couple of weeks.

In the meantime I've been thinking a bit about the implications of having gigabit to the home. In retrospect this is all pretty "duh", but my home computer has a better connection to various cloud data centers than it does to the spinning disk enclosed inside its own box. I can do 1000Mbps, or about 125MBps, with ~8-12ms ping times to some of the commercial cloud data centers. Meanwhile a 7200RPM spinning disk appears to be able to do around 200MBps (sequential reads) with about an 8ms latency. Ok, so the spinning disk wins by a little bit in this best-case scenario for it (it drops to ~75MBps for random reads, and if you do 4k reads instead of 2MB reads it shrinks to ~0.3MBps).
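Here's a quick back-of-envelope sketch of that comparison, using the rough numbers above (all approximate, and obviously very workload-dependent):

# Rough comparison of a gigabit Internet link vs. a local 7200RPM disk.
GBIT_MBPS = 1000 / 8            # 1000Mbps link ~= 125 MB/s
DISK_SEQ_MBPS = 200             # sequential reads
DISK_RAND_MBPS = 75             # random 2MB reads
DISK_RAND_4K_MBPS = 0.3         # random 4KB reads

for name, mbps in [("gigabit link", GBIT_MBPS),
                   ("disk sequential", DISK_SEQ_MBPS),
                   ("disk random 2MB", DISK_RAND_MBPS),
                   ("disk random 4KB", DISK_RAND_4K_MBPS)]:
    print(f"{name:16s} {mbps:7.1f} MB/s -> 1GB in {1000 / mbps:7.1f}s")

# Latency is the other half: ~8-12ms ping to a nearby cloud data center
# vs. ~8ms average seek + rotational latency on the spinning disk.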

But still, that is seriously close. My workstations typically have a smaller SSD for performance-sensitive stuff (ok, I typically have 2 now, one for the boot drive and another for source code), and a bigger spinning disk for big things like videos, photos, games, etc. You could seriously remote that latter application over the Internet and not really notice the performance difference.

When I ordered my connection the sales person said that most people in my neighborhood were not opting for the full gigabit, and that she frankly told anyone who didn't specifically ask for it that they didn't need it. Which does seem like the right advice given where the software and cloud services are today. But I suspect that will no longer be true in just a couple of years. Which is probably why Google is building out their own gigabit Internet around the country (with the added bonus that for their own customers they can make sure the latency is right when connecting to their own data centers).

posted in Networking, Technology | 0 Comments

18th June 2016

The Verge On Creating Mixed Reality Videos

The Verge posted a nice video about how to create mixed reality videos with the HTC Vive/SteamVR.

The one big thing that is sort of missing is more explanation of how you align the real world camera with the virtual one that has the third controller attached. Basically you need to specify in a config file the transform (offset and rotation) from the position of the controller attached to the real camera to the ideal focus point of the camera itself. You also need to specify that camera's "FOV" in a way that makes sense to a game engine.

At a basic level, we just created a test app that drew the virtual controllers and let you type keys to adjust the configuration in real time until they matched. I think I used X,Y,Z for the translation and U,R,F for the rotation axes. Put the real camera on a fixed tripod and adjust until it all looks right. The problem is there are a lot of degrees of freedom, and if something like the FOV isn't set just right you might get things lining up in one place but significantly off in the distance or at the edges of the view. So you keep moving the controllers and readjusting until you lock it in.
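To make the transform part concrete, here's a minimal sketch of the idea (hypothetical names- this isn't our actual tool, just the shape of the math):

import numpy as np

# The offset transform from the tracked controller to the camera's focal
# point- this is what you tune with the keyboard (or load from the config).
def translation(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def rotation_y(degrees):  # one axis shown; the tool adjusted all three
    r = np.radians(degrees)
    m = np.eye(4)
    m[0, 0] = m[2, 2] = np.cos(r)
    m[0, 2], m[2, 0] = np.sin(r), -np.sin(r)
    return m

# Example values, tuned until the virtual controllers overlay the real ones.
controller_to_camera = translation(0.0, -0.08, 0.12) @ rotation_y(2.5)

def virtual_camera_pose(controller_pose):
    # controller_pose: 4x4 world transform reported by tracking each frame
    return controller_pose @ controller_to_camera

The FOV has to be matched separately, since the game engine wants a single ideal pinhole-camera FOV that best approximates the real lens.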

Or of course if you have more than two controllers you can do it more easily. Since we had plenty of controllers around Valve we were able to track 8 of them at the same time (plus the HMD and the extra controller on the camera, of course!). So we distributed them around and used alignment points on the real camera with alignment lines we drew in the test app to line them all up.

This photo shows the actual configuration of the cameras during calibration. Notice that we lined up several of the controllers on the specific alignment points, some in the foreground and some further out.
Mixed reality camera calibration view

And this photo shows me getting things roughly in position in the test app with the controllers just in a circle. You can see the real camera view on the back monitor and the virtual one showing the exact same controllers right in front of me.
Calibrating mixed reality camera

Hopefully more precise ways to do this will be available soon. Ideally you could have fiducial markers that software can just read to get the exact alignment of the camera. You could also measure the distortion of the real-world camera and adjust for that. For example, we noticed that our alignment was slightly off near the edges of the frame, and you could adjust for that when rendering the mixed reality view of the game. In practice this latter effect was pretty small, and since typically not a lot of action is happening at the edges, people won't notice much.

posted in Mixed Reality, Technology, VR | 0 Comments

20th May 2016

PC Build Update

Last week I built a new PC! My old home workstation was using a Core i7-2700K and it just wasn't cutting it for some of the VR titles. On the one hand the old CPU was 4.5 years old and that should have been plenty of lifespan; by the old Moore's law the new CPUs should be 8x faster! Unfortunately we aren't still on that same curve. From the straight specs the CPU only progressed from 3.5GHz to 4.0GHz (with a max turbo of 4.2GHz), but overall, depending on which benchmark you read, you get a 27-45% boost in performance, and probably a bit more headroom if you count overclocking.

Before I get into the details of my new workstation I wanted to mention that whenever I give advice on a PC build it's always important to ask the person what they plan on using the PC for. If you don't have any real needs for graphics you can actually do great with some of the tiny PCs- for example the Intel NUC PCs are the size of a Mac Mini, cost ~$400 and can contain a Skylake CPU (up to the latest generation Core i7).

Even if graphics/gaming is a big part of the use there is a pretty big spectrum of capabilities. For 90% of people building a nice small Mini-ITX machine with the new NVidia GTX 1070 is probably perfect. On the other hand if you want to drive games on a 4k monitor or do really high quality VR it might even make sense to prep for a system with SLI (dual graphics cards). You can have plenty of gorgeous VR experiences on a single GPU like last generation's GTX 970, and indeed that is the target most games are aiming at now. But missing frames SUCKS. And even more, VR can be really sensitive to aliasing and related issues. There are a lot of factors, but a good chunk of the root cause is that you can move your head with great precision (and you can't keep it perfectly still), so many problems that traditional video games avoid by controlling the camera show up in VR. So this is just a long way to say that while I love tiny SFF machines, I designed my new workstation to support SLI and hope to at some point equip it with dual GTX 1080s.

Having committed to supporting SLI at some point, that pushed me to a mATX motherboard instead of an ITX one. In some ways it's kind of a shame- I'm not aware of any technical reason why someone couldn't build a daughter-card for an ITX motherboard that had dual GPU slots and let you position them in some sensible way in the case. But I've searched and that thing doesn't appear to exist.

The other issue that SLI raises is CPU family & chipset (this gets a bit complicated). The basic dilemma is that the Skylake processors like the 6700K use the Z170 chipset and the LGA 1151 socket, which provides 26 I/O lanes. These are the high-speed communication channels between the CPU and other devices. Modern GPUs all go in PCIe slots that have 16 lanes. You can do the math pretty easily and tell that if there are two GPUs and only 26 I/O lanes they aren't each going to be able to use all 16. On the other hand the X99 chipset, which supports the LGA2011-v3 socket, supports 40 lanes, so you can have dual GPUs each with 16 lanes on those systems. Unfortunately the CPUs for that socket lag the mainstream ones by a bit- the new CPUs for the LGA2011-v3 socket, called "Broadwell-E", should be releasing in a few weeks. There will be options that support 6, 8 and even 10 cores, although those get incredibly expensive. Unfortunately as PC consumers we are left with trade-offs. Broadwell-E is a generation older and has lower frequencies (the 6-core one will run at 3.6GHz), which means less single-threaded performance. Of course they can make up for it by having a bunch more cores, but many graphics applications are ironically poorly threaded on the CPU side. And they have those extra lanes, so when running dual GPUs each one gets the full set of I/O.

But do the full 16 lanes really make a difference? It took a bit of searching but I found some benchmarks from Puget Systems and from TechPowerUp. To skip to the conclusions, the difference between running a GTX 980 at 8 lanes vs 16 is tiny. Dropping down to only 4 lanes or all the way down to PCIe 1.1 is measurable, but the difference between x8 and x16 seems within the error bars (PCIe 2.0 vs 3.0 also didn't appear to make much difference). The one caveat I'll mention is that this is looking at FPS during the execution of a game. Ideally a well-written game shouldn't be putting much load on the PCIe bus during runtime because hopefully all the big textures and such are already on the GPU. I do wonder if this would have a measurable impact on level-load time and/or games that load new content seamlessly. With a PCIe 3.0 x8 GPU having roughly 8GB/s of bandwidth it could take about a second to swap out the entire 8GB of VRAM on these new cards. But another way to put it is that is ~88MB per frame at 90fps, so hopefully a game can stage loading those assets without glitching.
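For reference, here's the quick math behind those numbers (theoretical peak of ~985MB/s per PCIe 3.0 lane; real transfers will come in somewhat lower):

# Back-of-envelope PCIe bandwidth math.
PCIE3_MBPS_PER_LANE = 985      # PCIe 3.0: ~985 MB/s per lane per direction
lanes, vram_gb, fps = 8, 8, 90

bw_gbps = PCIE3_MBPS_PER_LANE * lanes / 1000.0
print(f"x{lanes}: {bw_gbps:.1f} GB/s")
print(f"full {vram_gb}GB VRAM swap: {vram_gb / bw_gbps:.2f}s")
print(f"per frame at {fps}fps: {bw_gbps * 1000 / fps:.0f} MB")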

The other reason I wanted to go with the Skylake processor / Z170 chipset was that I really wanted to get a PCIe M.2 drive for my boot drive and a USB-C port. Some of the X99 motherboards support these, but as they are slightly older I was concerned the support wouldn't be as good. As it was, I had to upgrade the BIOS on the motherboard I got before the M.2 drive would work correctly.

One other note on the build- I really wish I could have built this using the Cerberus case from Kimera Industries. It looks like an amazing design for a small case (smaller than most ITX cases) that can still fit a mATX motherboard. Unfortunately their Kickstarter didn't reach its funding goal and they are still trying to figure out how to get production going. I ended up going with the Corsair Carbide Air 240 case, which I have to say I'm very happy with so far. Good construction, and all the panels come off, which made it much easier to install components than any case I've ever worked with before. I also like that it keeps the power supply and drives in a section separated from the motherboard- I'm sure that helps with cooling a bunch, but it's even better for cable management. Of course it's pretty big compared to the Cerberus (70% bigger!) but it's not actually much bigger than the ITX case I had before.

So with all those details out of the way here is my current build-

Case- Corsair Carbide Air 240
CPU- Intel Core i7-6700K (4.0GHz LGA 1151 91W)
CPU cooler- Corsair Hydro Series H50 120mm Quiet edition (gets the CPU cooled by a nice quiet 120mm external fan rather than the usual coolers that sit inside the case's heat)
Motherboard- ASUS ROG Maximus VIII GENE Z170 mATX
RAM- G.SKILL Ripjaws V 32GB (2x16GB – I have 4 slots so I can upgrade to 64GB later if I want). DDR4 3200 CAS14. It's not clear if fast RAM is really worth it at all, but if you do look for fast RAM don't ignore the CAS spec.
Boot drive- Samsung 950 Pro M.2 512GB (really wish the 1TB version was out, but it isn't yet).
Development drive- Crucial 1TB M500 (transferred from my old PC)
Power Supply- EVGA 850W 80+ Gold fully modular. I like fully modular for cable management and this has enough juice for SLI. Otherwise I probably would have gone with 600-650W.
TBD- NVidia GTX 1080 (possibly x2)

I also threw a pair of spinning-rust drives in there. I had been using a few external 8TB "backup" drives as normal drives for a bit, but decided that for the storage holding all my photos, videos, and random other stuff it's probably better to use drives with a somewhat higher spec. I went with Seagate Enterprise NAS HDD 8TB 7200RPM drives that I got on Amazon. The 8TB "backup" drive is quite a bit cheaper and to be honest has worked great so far. But having had an opportunity to hear a talk from a guy working for one of the drive companies, I suspect upgrades to invisible basic components like the motors, bearings, actuators and such should help with reliability.

I'll try to post again soon with my more normal build recommendations. Also in a couple of weeks when Broadwell-E really comes out we can start dreaming about that 8-core system…

posted in Gaming, Graphics, Hardware, Technology, VR | 0 Comments

17th May 2016

Bill Gates interviews Neal Stephenson in 360 video

Bill Gates just posted a review of Neal Stephenson's book Seveneves. He also includes a 360-degree video interview with Neal- I'm pretty sure he is trying to mimic "Comedians in Cars Getting Coffee" except it's "geeks in cars getting burgers". They drive by various things, like two guys who just happen to be sword fighting in a park next to the road. :) I assume the Tesla is Neal's since the license plate is "7eves".

I was able to watch this just fine on the Vive using this experimental Chromium build.

Two things about the video are really noticeable in VR. First of all, the low framerate (I assume 30fps) of the video becomes really obvious. You just don't really see the difference much on a normal display, but something about it in VR makes it look really bad. I assume they are updating the head tracking at 90fps, but if they aren't doing that it could explain why it's so bad. I noticed the same thing in the new Disney Movies VR app. Getting higher frame rate video seems to be one of the big challenges for this stuff to be successful- the frame rate seems like a bigger issue than the resolution, yet none of the cameras I've seen have focused on that at all (typically maxing out at 60fps, which can be even worse because you get an uneven 2:3 frame cadence since the HMD displays at 90Hz).
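To see where that uneven cadence comes from, here's a quick sketch- at 60fps on a 90Hz display every pair of video frames spans three refreshes, so frames alternate between being shown once and twice:

# Which 60fps video frame is on screen at each 90Hz display refresh?
video_fps, display_hz = 60, 90
frames = [int(refresh * video_fps / display_hz) for refresh in range(9)]
print(frames)  # [0, 0, 1, 2, 2, 3, 4, 4, 5] - an uneven 1x/2x alternation

A 30fps video at least divides 90Hz evenly (every frame shown 3 times), but the motion itself is then visibly choppy in VR.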

Second, it's kind of a pain to swivel around since the direction where things are happening keeps changing. The notion of "immersive video" is a neat one, but I'm not sure you actually want action in 360 degrees. It is much more comfortable to keep the action within 180 (or a bit fewer) degrees but still take advantage of the full FOV and the ability to attract the viewer's attention in their peripheral vision. So many early "VR videos" that I've seen really try to take advantage of the full 360 degrees, and it seems like a mistake (similar to how early 3d movies overdid the effect of sticking something in the viewer's face).

posted in Technology, VR | 0 Comments

28th April 2016

Mixed Reality Video

One of the really fun projects I got to work on over the past few months was creating a mixed reality video to show off SteamVR and the HTC Vive. With virtual reality we have this big problem that it can be really difficult to convey what it's really like to be in it. So far the best we have done to deal with this is mostly just spending lots of time doing demos. Now that products are out in public the number of people who can experience it in person has grown a ton, but it's still only a limited set of people that will actually get to try out the hardware for 30 minutes.

Meanwhile there are a ton of videos that capture the first person view that the player sees inside VR or else footage of the person standing in a room moving around (or maybe a split-screen with both at the same time). These can be a good start but the first person view swings around all over as the person looks around and doesn’t feel especially real.

A mixed reality video on the other hand can provide a much better representation of what it's like to be a real person "inside" one of these environments. An external camera shoots the person playing in a green-screen environment and they are composited into the views generated by the game (but the game generates a view from the perspective of the camera, not the person's head). Camera movement helps a ton too, because it helps you feel the real dimensionality of the space the person is in, as opposed to the person just moving around on a 2d background.
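Conceptually the composite for each frame works something like this toy sketch (not the actual pipeline- just the layering idea):

import numpy as np

# Toy mixed reality composite. The game renders two views from the tracked
# camera's pose: the scene behind the player (background) and the bits in
# front of the player (foreground, with alpha). The real camera frame gets
# chroma-keyed to cut the player out of the green screen.

def chroma_key_alpha(frame):
    # Crude green-screen matte: opaque wherever a pixel is not green.
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    is_green = (g > 0.5) & (g > 1.5 * r) & (g > 1.5 * b)
    return np.where(is_green, 0.0, 1.0)[..., None]

def composite(game_background, camera_frame, game_foreground, fg_alpha):
    player = chroma_key_alpha(camera_frame)
    out = camera_frame * player + game_background * (1 - player)
    return game_foreground * fg_alpha + out * (1 - fg_alpha)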

Check it out here-

One of the best things about this was working with the awesome Valve film team, getting a chance to experience a professional video shoot, and hearing the great ideas they had about what would make a compelling video. The goal from the beginning was to make something that felt really real and didn't come across as the usual marketing gimmick video. So a ton of the planning around the shoot involved bringing in normal people and really just trying to capture their actual reactions experiencing VR for the first time and playing the games. The funny thing is that many of the best reactions didn't make the final video because they were so over the top that they looked fake. They were actually totally real, but it's one of those things where people would never have believed it just watching a 5 minute video. Oh well.

posted in Mixed Reality, VR | 0 Comments

4th April 2016

HTC Vive Launches Tomorrow

Once again one of those "I haven't posted on my blog in a long time" posts. I guess I've been busy. Tomorrow is a really big day- the HTC Vive launches! SteamVR (our bit of it) has already been in use by thousands of people, so in some respects it's a bit less of a big moment (and some consumer deliveries already happened today). But tomorrow the press embargo lifts and all the titles launch (+ a little extra thing), so it's a really big deal.

Those titles are a key part of it. One of the most exciting bits for me over the past year+ has been working with tons of developers on their stuff and getting a chance to see a bunch of these games (and other things) early.

One of the interesting phenomena in the VR world is that there is lots of "what is VR?" discussion. The classic case is panoramic 360 photos/video. I don't feel a need to include/exclude things, but there has been an interesting discussion about whether these really count (since they don't really work right as you move your head, etc).

But beyond that, if you look at the bigger VR landscape there are a bunch of titles that could basically be on a flat 2d monitor but are just displayed in 3d. Those are fun, and the 3d makes them feel fresh.

But some of the new stuff coming out tomorrow is really a different level. Room-scale and, even more importantly, motion controllers really transform this stuff. There are titles that are brand new things unlike anything you have ever seen before. But then there is also stuff like Space Pirate Trainer. The thing about that one is that in some ways the game is like Galaga, a 35-year-old game where space ships fly around above you in patterns and you have to shoot them. But Space Pirate Trainer transforms this into an entirely new thing- you point with your hands, you dodge, you have to look out for things coming from both sides, you get to block stuff with a shield. It's not slightly fresh, it's totally fresh, and I think it is a perfect way to reveal how much computing will be transformed by VR (and AR and in-between).

posted in Gaming, Technology, VR | 1 Comment

18th September 2015

VR Impact on Development

Another interesting aspect of VR is how it impacts the development process and how you structure your workplace when building VR experiences. Valve’s normal “cabal” structure is pretty well documented elsewhere but we have had to adapt how we organize our desks for VR. It certainly helps that all of our desks are on wheels so we could just move things around to adapt.

The key thing for us is that it's important for every developer to be able to quickly stand up from their desk and try out the code they are writing. I should say that, depending on the thing you are working on, probably 50% of the time or more we just end up putting on a headset at our desks and trying things out there. But when working on things that are more about interactions it's important to be able to stand up and move around a bit. We have structured areas so that we have a bunch (4-8) of desks that surround a common area that can be used for this testing. So all the desks are around the outside, and when you need to try something out you just stand up and use the space.

Depending on the group these spaces are different sizes- some of the people working on things that need more moving around have spaces that more closely approximate the maximum tracked area, but I rarely need that and am sitting with a few others in a smaller space. Each area has its own pair of base stations that we can all share, but we each have our own HMD, which I realize is a luxury that most developers don't have yet (balanced by the fact that we have to try out new builds of stuff all the time to make sure it works before it gets to others). Right now, because base stations can interfere with each other, each area is curtained off from other areas. With the final product this shouldn't be necessary since the sensors will be able to reject incorrect signals.

Of course with shared spaces it's important to share- we can't all be testing and bumping into each other at the same time. There is also an important rule that the person with the HMD on their face has right of way. If you are moving through the area it's your responsibility to avoid them since they can't see you (for now).

posted in Developers, Technology, VR | 0 Comments

25th August 2015

VR Performance and CPUs- Cores vs Frequency

I feel like I'll probably write a bunch about performance and VR as we go along since it's such a complex topic and there is so much stuff we still need to learn. PC and gaming performance has always been an interesting topic, but whereas in the past it was mostly about "can I average 50fps or 55fps on this game?", now it becomes "can I consistently get 90fps 99.9% of the time so I don't get sick?". 100fps doesn't matter, 92fps doesn't matter, but 89fps just sucks. 90fps becomes a really sharp performance line- you get about 11.1ms to produce each frame, every frame.

So far we only have really preliminary performance data since it depends so much on the content, and all of that is in-progress and hasn't really been optimized. People also haven't done the work to have their experiences scale up and down depending on the PC they are running on. For example, with GPU performance you can do a lot by changing the size of the render target, which is normally 1.4x the size of the real physical panels in each dimension (so a total of 1.96 times the pixels). Having at least 1.4x scale gives you ideal anti-aliasing and sharpness when the panel distortion is applied, but if you don't have enough GPU performance you could reduce that at the expense of some sharpness and graphical quality. Not ideal, but it sure beats missing frames.
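As a rough sketch of how that pixel cost scales (using the Vive's 1080x1200-per-eye panels):

# Render target size vs. supersampling scale. Pixel cost (and roughly GPU
# fill cost) goes with the square of the linear scale.
panel_w, panel_h = 1080, 1200   # per eye on the Vive
for scale in (1.0, 1.2, 1.4):
    w, h = int(panel_w * scale), int(panel_h * scale)
    rel = (scale * scale) / (1.4 * 1.4)
    print(f"scale {scale}: {w}x{h} per eye ({rel:.0%} of the default cost)")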

If CPU becomes the limiting factor your choices are a little more complicated. For one, games are traditionally poorly multi-threaded and thus don't take good advantage of additional cores, which have been the main CPU performance improvement lately. When dual-core CPUs first came out most games didn't take advantage of them at all, and while the situation has improved quite a bit recently, few games seem to take full advantage of 4-core CPUs, never mind some of the 6-core machines that work great as development boxes (compilers LOVE 6 cores!)

One of the issues in picking the ideal CPU is that CPUs are typically limited by their power consumption / heat generated. So a 6-core CPU is usually limited to much lower frequencies than a 4-core CPU. While it can have much more overall computing resources, if the given task isn't able to keep at least 6 threads busy it will become gated on the performance of a single core, which is higher on the chip with fewer cores. So I suspect for many gaming scenarios the high end 4-core CPUs will be better for now than the otherwise better 6-core ones, until games can make better use of threads. I'm looking forward to trying out the new Intel Core i7-6700K, which is the new 4-core 4.0GHz Skylake CPU that should hopefully be the fastest yet for these scenarios.
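A little Amdahl's-law-style sketch shows the trade-off (the workload fractions are made up, just to illustrate the shape of it):

# Relative performance of a 4-core 4.0GHz chip vs a 6-core 3.6GHz chip as
# a function of how parallel the workload is (Amdahl's law, with frequency
# as a stand-in for single-core speed).
def perf(cores, ghz, parallel):
    return ghz / ((1 - parallel) + parallel / cores)

for p in (0.0, 0.5, 0.8, 0.95):
    print(f"parallel={p:.2f}: "
          f"4c/4.0GHz={perf(4, 4.0, p):5.1f}  6c/3.6GHz={perf(6, 3.6, p):5.1f}")

# In this toy model the 6-core only pulls ahead once the work is more than
# roughly 60% parallelizable- a bar many games still don't clear.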

posted in Graphics, Performance, Technology, VR | 0 Comments

18th August 2015

VR API support in Games

There has been some confusion in various recent discussions about what it takes to add VR support to a game, and specifically what it takes to support multiple VR devices. As usual there are several layers to the issue, so some degree of confusion is pretty understandable.

At its simplest level VR support consists of two key aspects-
1) Rendering a stereo view of the scene into two textures (or one texture with one eye on each side) and handing that off to the VR API. At that point the VR API does a bunch of complex stuff including some corrections, distortion for its optics (which can be calibrated for individual devices), and compositing additional system UI. But none of that latter stuff is something the app developers have to worry about.
2) Updating the head position as the HMD is tracked. The underlying VR API should provide the game with a "pose" that represents the exact location and orientation of the head, so that as you move your head around in the real world your view will exactly correspond.
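In pseudocode the per-frame loop looks something like this sketch (hypothetical names- not the actual OpenVR function signatures):

# Sketch of a VR game's per-frame loop against a generic VR API.
def vr_frame(vr, game):
    head_pose = vr.wait_get_head_pose()          # (2) tracked head pose
    for eye in ("left", "right"):
        eye_pose = head_pose @ vr.eye_to_head(eye)  # per-eye stereo offset
        tex = game.render_scene(eye_pose, vr.projection_matrix(eye))
        vr.submit(eye, tex)                      # (1) hand textures to the API
    # Distortion correction and compositing the system UI happen inside
    # the VR compositor after submit- not the game's problem.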

At the above level it should be very easy for VR game authors to target multiple vendors' headsets. If you are using Unity or Unreal Engine, support for both the Oculus and OpenVR SDKs is built-in / available with the addition of a simple plug-in. Better yet, OpenVR is designed with a driver model for different hardware, and our goal is to support any major HMD if you use it (although we are limited in our ability to promise specific support because vendors can change their APIs in breaking ways at any time and then we need to update).

Beyond the basics of the headset and scene rendering are other issues around the interactions that the game supports. If your game doesn't use controllers and is a basic seated or simple standing experience, it should pretty easily work with any headset. In a similar fashion, if you use non-tracked controllers like the Xbox or Steam controllers, anyone can buy one of those and use it with any headset. Tracked controllers are a more complicated problem- they each have some more specific behaviors and are tied to a specific tracking system. So in the short run it would be hard to mix and match the controllers, and if a specific game expects the HTC Vive trackpad input as part of its interaction, or another game is designed around Oculus Touch specific features, those might be harder to support across both platforms.

Our goal with SteamVR/OpenVR all along has been to develop technology and make it available to others in the VR community so that we can have high quality VR that doesn't make people sick, and where our customers can buy any VR equipment and enjoy their games on it. We hope Lighthouse will become a standard for tracking so that you can use controllers from many vendors (in many styles) to interact with all your games, but the OpenVR APIs can also work with non-Lighthouse tracked controllers. Hopefully the VR industry can avoid repeating the early days of the 3d-accelerator market, where games were created that only worked with one vendor's hardware.

posted in Technology, VR | 0 Comments

11th August 2015

Cloudhead on Locomotion in VR

Cloudhead Games has some great ideas about locomotion in VR in this video.

This video brings up a couple of interesting topics worth discussing a bit more.
The first is about some details of the chaperone system. The Cloudhead guys discuss how they are showing some hints in game about the bounds of your real world space and I thought it would be interesting to discuss that in the context of the overall system.

Chaperone exists first and foremost to help people feel comfortable moving around in room-scale VR without having to worry about hurting themselves by running into something in the real world. One interesting aspect of this is that the chaperone system can really help the feeling of presence, because once you get used to it you feel much more free to move around in your space. We do a ton of play testing and demos, and pretty commonly see that for the first couple of minutes people move around with a lot of hesitation, but once they get comfortable they let the experience they are in take over.

Within the chaperone system we think of two boundaries. The hard-bounds are the real limits of the physical space. If you are in an empty room, that would be the walls. If the room isn't empty it might be some artificial line that represents the furniture. But the point is that hard-bounds are a real "do not pass" line. Hard-bounds are implemented in the OpenVR system, and when the player gets near them we will draw the shape of the room right in the compositor, over anything the current experience is drawing. This is actually somewhat intentionally presence-breaking. If you are about to collide with something we want to bump you out of the immersive experience right away.

We also tried experimenting with using just a partial wall in front of you for the hard-bounds to tone it down a bit. I didn't like that approach much, because once I got near those limits I really wanted to see the shape of the whole room to help reorient myself.

Soft-bounds however are another matter. Soft-bounds are another set of lines inset a bit from the hard-bounds, and they represent the space you should be inside most of the time when playing. When you are standing inside the soft-bounds, you shouldn't be able to easily whack things outside the hard-bounds. Soft-bounds aren't drawn by the system because we don't have a good way to do it automatically and keep you immersed in the experience you are in. However, each experience can ideally represent the soft-bounds in some way that is consistent with its own look and that doesn't really disrupt your overall presence. One great example of this is TheBlu from WEVR. You are standing underwater on the deck of a ship, and it's safe to walk anywhere up to the railing in one direction and up to a bunch of junk and wires in the other direction. If your play-space is smaller they could pretty easily move the junk and wires so that the clear area of the deck still represents the safe space. Other games might need to use things like subtle glowing lines on the floor (as seen in the Cloudhead video).

So the key is- use subtle cues to give the player hints about the right place to play. And when they move outside those limits, give them less subtle warnings to keep them safe and help them reorient in the space.
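Here's a toy sketch of that two-boundary logic, assuming a simple rectangular play space (the real system works from whatever room shape was traced during setup):

# Toy chaperone logic: soft-bounds are inset from hard-bounds; crossing
# each one triggers a different level of warning.
def inside(pos, half_w, half_d, margin=0.0):
    x, z = pos
    return abs(x) < half_w - margin and abs(z) < half_d - margin

def chaperone_state(pos, hard=(2.0, 1.5), soft_inset=0.4):
    if not inside(pos, hard[0], hard[1], margin=0.2):
        # Near the hard-bounds: the compositor draws the room shape over
        # the game. Intentionally presence-breaking.
        return "hard warning"
    if not inside(pos, hard[0] - soft_inset, hard[1] - soft_inset):
        # Outside the soft-bounds: the game shows its own subtle cue
        # (glowing floor lines, TheBlu's ship railing, etc).
        return "soft cue"
    return "clear"

print(chaperone_state((0.0, 0.0)))  # clear
print(chaperone_state((1.7, 0.0)))  # soft cue
print(chaperone_state((1.9, 0.0)))  # hard warning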

The Cloudhead video also shows some great ideas about dealing with virtual walls. Obviously in VR you can't prevent the user from moving the camera into some object just because it's supposed to be solid in the virtual world. Remember that the camera IS their head, so if they move their head you have to move the camera. But that doesn't mean you can't do something about it. Now, sometimes it's pretty fun to just let people put their heads inside your geometry and see the cool insides (people do this all the time with the robot in our Aperture demo, and it's a lot of fun). But if you don't want to allow that you can either blur or fade the world when they put their head into the object. I especially like the Cloudhead solution of teleporting you back to some safe place if you stay in there for a while- the problem with just fading or blurring is that it can be disorienting to not have that visual reference, making it uncomfortable or actually difficult to find the correct direction to back out.

Finally, I have heard some reactions to the video from people expecting that the teleportation for locomotion will diminish presence. I'm sure this varies a bunch from person to person, but for me at least it ends up feeling very natural. It's hard to describe, but after using it for hours I end up actually teleporting around fairly quickly and my brain doesn't even think about it that much anymore. In contrast, when I try a VR title that moves me around by gliding it feels very unnatural- something about the smooth motion seems wrong (and then I get sick and things feel even more wrong...)

It all goes back to part of what is so exciting about working on VR right now- no one really knows the right way to do all this stuff. The space is wide open for people to come up with new ideas and test them to see what really works.

posted in Technology, VR | 0 Comments