Insights into what's coming in the world of VR from Oculus Connect
The thing about virtual reality is that it's hard to describe to people who have never experienced it. Imagine something awesome that you've never experienced, something wholly subjective. It's similar to trying to describe the effects of drugs or alcohol to people who have never tried them. Sure, you can describe the effects in a technical and medical manner: dizziness, possible nausea, euphoria—but you can't tell them what life's like after two and a half whiskey sours. That part is subjective. In much the same way, you really have to experience VR to get it. That experience is up to the wearer.
My first VR experience was with the humble little Google Cardboard, just a little over a year ago. I was surprised that a simple Android phone could create an immersive experience that vivid. When I tried the Oculus Rift and HTC Vive, both products blew my hair back in terms of image quality. The Samsung Gear VR, while not much more than a really, really fancy Cardboard headset, offers better interface controls and optics than Google's corrugated paper solution. That's not to say it's not breathtaking, but the Gear VR is notably less awesome than its bigger brother, the Rift. However, at $100, the Gear VR will offer a very good VR experience with a relatively low barrier to entry (assuming you've got a Samsung Galaxy S6).
The Samsung Gear VR offers a surprisingly good VR experience at a fraction of the cost of the Oculus Rift.
When I drove up to the Loews Hollywood Hotel in Los Angeles, I didn't quite know what to expect from Oculus Connect. After all, what could be announced that would be groundbreaking? The Rift release window? We already know it'll be Q1 2016. The Oculus Touch controllers? Q2. There were a few partnerships to be announced, as well as the pricing for the Gear VR ($100) and the release date (November).
What this conference was really about was content. When we're talking content, we're not just talking about games, though games are the easy low-hanging fruit that you'd expect in VR. The fact is, while gamers may rush to pick up VR headsets, the big money sits outside of gaming. John Carmack—Oculus' chief technology officer and the guy who birthed the first-person shooter when he wrote the code for Wolfenstein 3D and Doom—gave a dense, stream-of-thought keynote at Connect that had nerds everywhere listening. When even Carmack is talking about content and video, not just game engines and lighting polygons, you know something is up.
John Carmack, Oculus's chief technology officer.
The big money—and this is where Oculus is apparently trying to make its mark—is in the kind of content my mom would consume. That means Netflix and other things that won't necessarily require a whole lot of interaction. Oculus's deal with Netflix (along with the Minecraft deal with Mojang) was the biggest business news at Connect, by far. Combined with Twitch, Facebook just got distribution deals with some of the biggest video content players on the Internet.
While the deals with Twitch and Netflix are huge, those are just the tip of the iceberg when it comes to non-game content. The two big streaming services, as awesome as they are, still focus on delivering 2D video within a virtual living room. To some, that may seem like a bit of VR hubris gone too far: Most people can already watch Netflix in their living room. That doesn't mean that Facebook isn't banking on VR being a big thing.
VR Streaming
One of the talks in the first round of sessions at the TCL Chinese Theater was all about streaming VR video. With a hundred or more developers and content makers packed into the movie theater for the talk, it was readily apparent that there are plenty of people who could be working on creating content to stream to VR users. VR streaming seems easy enough on its face, but in reality, streaming VR presents a set of challenges that regular video streaming just doesn't have to deal with.
David Pio, a video streaming engineer at Facebook, gave the talk and explained the methodology Facebook was using for 360-degree VR video streaming.
First off, it really helps to imagine what a VR video would look like as a geometric object. From the viewer's perspective, the VR experience should be a sphere, since you can look in any conceivable direction. But since when did cameras capture video in spheres?
They don't.
Instead, software has to stitch the rectangular frames that cameras do capture into a cube, and do a little logical magic to make the video appear as spherical as possible. But that cube has way too much data for the average 5Mb/s Wi-Fi connection, as Facebook put it. To get down to 5Mb/s, Facebook has to compress that video and discard most of that cube.
Pio said Facebook first approached the problem by lowering the quality of the video outside the user's field of vision and blurring the user's peripheral view. They also had to rethink how they buffer video: Instead of buffering 10 or more seconds of video like YouTube does, they buffer one second. That second is looking in one direction, with the other directions reduced in quality. If the headset moves, the video is still visible, but either blurry or at noticeably lower quality. Pio said that this blurry or low-quality video was preferable to having a blank screen when you move your head rapidly.
Luckily, as the headset moves, the movement data is sent back up to the server, which then sends down another second of video, this time looking in the new direction. With the constant polling of headset direction, buffering more than a second of video becomes a monumental waste of time and resources.
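The loop Pio described can be sketched in a few lines. This is a rough illustration, not Facebook's actual code; the function names and the set of pre-encoded yaw angles are hypothetical stand-ins.

```python
def nearest_view_direction(yaw_deg, available_views):
    """Pick the pre-encoded orientation closest to the headset's current yaw,
    handling wraparound (350 degrees is closer to 0 than to 315)."""
    return min(available_views, key=lambda v: abs((yaw_deg - v + 180) % 360 - 180))

# Hypothetical: the server keeps a version of each segment encoded for these yaws.
VIEWS = [0, 45, 90, 135, 180, 225, 270, 315]

def stream_loop(get_headset_yaw, fetch_segment, play):
    """One-second buffering loop: every iteration re-polls the headset,
    so the server always sends the next second facing the right way.
    All three callables are stand-ins for real headset/network/decoder APIs."""
    while True:
        view = nearest_view_direction(get_headset_yaw(), VIEWS)
        segment = fetch_segment(view)  # sharp toward `view`, degraded elsewhere
        if segment is None:            # end of stream
            return
        play(segment)
```

Because the direction is re-chosen every second, there is no point buffering further ahead: any deeper buffer would likely face the wrong way by the time it played.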
Even with the polling and the reduced-quality cube, there's still too much data flowing down the pipe. Facebook chose to address this by reducing the cube to a pyramid, with the base as the primary, in-focus viewing area. Using a pyramidal shape spares the encoder from doing anything with the "back" wall that the user can't see, and allows more aggressive compression of the other walls as they are reduced to triangles.
The VR streaming video pyramid.
When the server calculates this pyramid, it unfolds the pyramid and streams the video as a rectangle. The decoder on the client side then re-folds this video into the pyramid and outputs the video to the VR headset. A single frame in the second that is streamed contains the directional data and tells the client how the pyramid was unfolded. The geometry and video polygon location within the streamed frames can change based on head movement, and compression efficiency for a given region of video.
Voilà. VR streaming video that's much closer to the 5Mb/s threshold.
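A back-of-the-envelope pixel count shows why the pyramid helps. The numbers below are purely illustrative (Facebook didn't publish its exact quality scaling), but they capture the shape of the savings:

```python
# Rough area accounting, in units of one full-quality cube face.
face = 1.0
cube_area = 6 * face            # naive cube map: six full-quality faces

base = face                     # pyramid base: the in-focus viewport, full quality
side_scale = 0.25               # assumption: side triangles downsampled to quarter area
pyramid_area = base + 4 * (0.5 * face * side_scale)

print(cube_area, pyramid_area)  # → 6.0 1.5, roughly 4x fewer pixels to encode
```

The exact ratio depends on how aggressively the side triangles are downsampled, but any version of this geometry sheds the back face entirely and shrinks everything the viewer isn't looking at.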
While logically reducing to other geometric shapes—like a cone—has been considered, Pio said that the pyramid had yielded the best performance and overall quality.
While the technical parts of VR video streaming are interesting in themselves, the implications are numerous. By making VR video "cheap" enough—in terms of resources and bandwidth—it opens the door for mass adoption of the medium. Imagine live, streaming video where you can be court-side—right next to Jack Nicholson—at the Lakers game, from a hotel room in Boston. Imagine diving on the Great Barrier Reef with researchers from your living room. Those are the types of experiences that become possible with this technology.
The New Cinema
While Oculus Connect is primarily a developer's convention, there were plenty of content creators and creative types in attendance, too. And for the second talk on the first day, people crammed into the theater where Facebook's David Pio had just given a talk on streaming to see Rob Bredow talk about how Lucasfilm and Industrial Light & Magic are using VR to design experiences as well as plan shots.
I think most of the people came to the talk because, you know, Star Wars. That wasn't far off, since Bredow showed lots of video of how Lucasfilm and ILM were using VR technology to tell stories within Lucas's far, far away galaxy.
One of the first tools that Bredow showed off was the use of VR to scout locations, using an in-house tool called V-Scout. The scouting tool allows directors to plop down digital assets on a landscape and move around within it to find the best angles and determine movement of action. The tool, he said, can mimic various Panavision lenses, to give the user an idea of how the scene will actually look when shot. Scouting locations is an expensive part of filmmaking, and being able to do it remotely using topographical data and VR imagery could cut costs for film production, Bredow said.
The other really impressive tool that ILM showed off was the live rendering process they developed. The live rendering can capture human actors in a motion capture suit, and plop those actions into a digital scene. This isn't too unlike what games already do: dynamically render 3D scenes as they are played out. But Bredow was quick to point out that this tech was quite different, and didn't focus on viewer interaction.
In one demonstration, Bredow showed a scene that had been scripted using this technology. A squad of stormtroopers patrol a desert village, looking for Rebel droids, of course. R2-D2 and C-3PO emerge from a shadowy house, requesting a pickup from a Rebel ship captain. In the distance, a ship holds off an approaching AT-AT. As the droids turn to find an alternate route, they are confronted by none other than Boba Fett, and the scene ends.
While the video looked good, near film-quality as it was, Bredow backed up to show what made this so cool: With VR, you can see any part of the scene you like, and you aren't tied to what the "camera" shows you. After he restarted the demo, he paused the "video" and moved the camera perspective around different sides of the stormtroopers. From there, he looked around to the house where the droids were hiding. After moving the perspective to include the droids, he pushed play, and we saw and heard Princess Leia giving instructions to the droids before they emerged. Jumping around, we saw what Boba Fett was doing (blasting some folks) before he ran into the droids.
Bredow says that this technology is great for telling shorts where users can examine multiple storylines that happen at the same time. The Boba Fett or droid scenes normally would have "been left on the cutting room floor," Bredow says. But with VR and dynamic rendering, viewers can explore these hidden plot points to get a better understanding of the story.
While ILM was showing off its in-house tools, Oculus was busy giving stuff away to filmmakers. During the keynote, Oculus announced that it would be basically open-sourcing the assets used to create its VR experience Henry. While the story of Henry—a cute, melancholy hedgehog who just wants a friend to be able to hug him without fear of being impaled by his spines—wasn't particularly deep, the experience itself was a great proof of concept. It was pretty much like being inside a Pixar short.
Henry, the hedgehog. (Oculus)
Offering up the experience as a boilerplate for other creators to learn how to create VR experiences, while sounding very open, is somewhat analogous to giving away code examples to programmers. By seeing how an entire application works, a developer can use some of the same methodologies or tools to solve their particular problem. Creators who are interested in VR production will be able to use Henry in much the same way.
I got a chance to talk to Eugene Chung, founder of Penrose Studios at the conference at the developer lounge. He was the only VR film developer in the room, which was dominated by game developers. I wanted to pick his brain about what he thought about VR as an artist.
Chung showed me a short called The Rose and I that Penrose had produced, based on the classic story of Le Petit Prince (The Little Prince). Much like Henry, it was more of a film than a game, though it was rendered dynamically and you could move around in the environment and look in all directions.
From what I can tell, what to call passive VR "films" is still up in the air. "VR experience" seems to be a popular term, but "VR film" has been used here and there, too.
"It truly is a new art form," Chung told me. He likened using VR to the emergence of film. "I can tell you that just as cinema is its own language, VR is its own language." VR as a medium, Chung said, was just as different from film as film was from theater. The storytelling remains quite the same, but how you visually represent stories is quite different.
That doesn't mean that VR will kill the movies. People still go to plays, after all. The VR experience is wildly different from seeing a film on the big screen. As it is now, there's no real replacement for hugging your significant other or laughing with your friends while watching a film together.
Creative tools
ILM's programs weren't the only creative tools that were highlighted at Connect. If there was one non-game that stole the show at Connect, it was Medium. The program serves as Oculus's "paint program for VR."
I got to play with Medium, and I thought it was actually the best use of Oculus Touch that I experienced (to be fair to other devs, there are only so many demos you can try in a given time). While I played in Medium with an Oculus software engineer who works on the project, I felt a childlike joy as I discovered the tools needed to create an (admittedly poor) sculpture. As much as I love the Eve: Valkyrie demo for its fulfillment of a childhood fantasy of being a starfighter pilot, Medium touched me on another level.
Wes Fenlon from PC Gamer tries out Medium at Oculus Connect.
It seems straightforward enough: You're in a room, and you can create 3D objects with a palette of tools found on your off-hand. Once your tool and color are set, you can add material to the 3D space. It just floats there, defying gravity, which should make creating virtual pottery much easier than the real thing. When you're done, Medium allows you to save the object to an .obj file, or other 3D object formats. You can then send that file to a 3D printer, plop it into a game, or use it as a 3D asset in filmmaking.
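The .obj format Medium exports to is simple enough to sketch by hand. As a hypothetical illustration of what such an export boils down to (not Medium's actual code), here is a minimal writer for the two statements every .obj viewer understands: `v` lines for vertices and `f` lines for 1-based face indices.

```python
def write_obj(path, vertices, faces):
    """Write a minimal Wavefront .obj file: one `v x y z` line per vertex,
    one `f i j k` line per face. Faces index vertices starting at 1."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            f.write("f " + " ".join(str(i + 1) for i in face) + "\n")

# A single triangle as the simplest possible "sculpture".
write_obj("triangle.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

Because the format is plain text and this simple, files like this drop straight into game engines, 3D printers' slicing tools, and film pipelines alike.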
The engineer had admin control of the experience, and at one point she revoked my use of the tools to demonstrate a different tool—the symmetry plane. As useful as the plane is for creating objects that won't look lopsided, I was more struck by the admin abilities of the program. I instantly thought of a teacher showing sculpting or digital art students how to create a specific shape, without fear of the students altering the object before she was ready.
While great for artists, Medium in its current state is limited for designers. It's really tough to get straight lines or exact curves like you can get in Maya or AutoCAD. Oculus software engineer Lydia Choy said that straight-edge tools and the like are in development.
Even with its limitations, thoughts wandered to people with disabilities. I have a friend who gets severe pain in his hands, which precludes him from doing basic things like typing for long periods. This friend is an artist too, and the loss of his hands as useful tools had been particularly hard on him. Medium with Oculus Touch offers a solution where a person can create 3D art with very little physical effort. Besides the fact you don't have to hold up a heavy object to carve it, the Touch controllers are easy to actuate and are light enough to use for a long enough time to be useful.
Other creative applications like Medium could have far-reaching implications, especially if developers integrate a social element to it. Working on an object by yourself is cool, but it's way better if you can do it with someone else. Having a spare set of eyes and other ideas could help engineers, artists, and even doctors.
Gamification of Training
I talked to a pair of game developers from Bossa Studios, Sylvain Cornillon and Henrique Olifers, about creating games for VR.
Bossa Studios makes the VR game Surgeon Simulator, where the player is tasked with slicing and dicing patients. As with many games, this lets people without a medical license do things they'd otherwise never get to experience.
Like many VR games for Oculus, Surgeon Simulator is "tabletop" style, meaning that you play on a virtual surface, looking down at the objects you interact with. This play style is popular since the Rift doesn't lend itself to moving around very much.
Surgeon Simulator lets you chop up humans and aliens in search of squishy organs.
Both Olifers and Cornillon said that keeping the player in one place, while a limitation on its face, allows for a lot of creativity.
"The level of detail for objects must be high," Cornillon said. The two men noted that in many FPS games, players often sprint right past small objects like bottles or barrels that an artist had to work on. In VR, the player spends a lot more time looking at those objects, so they have to be higher in detail, and artists can justify putting a lot of sweat equity into creating them.
"You can have a lot of fun in one space," Olifers said.
Getting off the gaming subject, I asked Cornillon and Olifers if they thought Surgeon Simulator could be used in a training environment for EMT, military, or medical students. They said it was entirely possible, though not with the game in its current form.
This isn't surprising. Pilots have been using simulators and VR to train for decades. It would make sense that as the technology improves, training in VR for surgeons or other people who use their hands could be a cost-effective supplement to "real" hands-on training. After all, electrons are cheaper than cadavers. At least, I hope they are.
What's in it for us
With all the tools becoming available to developers and content creators, the clear winner is the end user. As it was with graphics technology, gaming will likely drive the bleeding edge of VR development. However, the big money will be in content, and likely the passive type.
What we're witnessing with VR headsets like the Gear VR, HTC Vive, and Oculus Rift is the creation of a new medium. Oculus, and by extension Facebook, is scrambling to make sure that there will be plenty of content for its platform as VR is unveiled to the wider public over the next six months. What this means for media consumption is really anyone's guess, but the new medium has clear advantages and pitfalls.
On the upswing, VR allows us to be more social over distance. Sure, people are "social" via text and photography on Facebook or Twitter, but there's something more intimate about watching a match on Twitch in a virtual room or working on a virtual sculpture together. Seeing someone's avatar does fool you into thinking there is someone else physically there. The simple act of waving at the engineer in Medium was enough to convince me that she was actually there, in the room with me.
Even the use of VR video or VR experiences has the potential to entice our sense of empathy. While sitting on the train to catch my flight to Los Angeles, I listened to a TED Radio Hour episode about screens. The podcast mentioned a program where the UN shot a spherical video in a Syrian refugee camp. Watching the video in VR and seeing the children wave at the cameras had some diplomats who watched the video in tears.
Say what you want about the subject matter, but the ability to feel presence in VR creates more empathy for others. The sense of presence is more connective than seeing things through a rectangular portal. As one presenter noted at Connect, in VR, you can't look away. That itself may have an immense power to connect us as we tell stories, play games, or create. Facebook has good reason to be bullish on its investment in VR; it's working hard to be the dominant force in the space with Oculus.
That doesn't mean other VR vendors won't have plenty of room to create that sense of presence and magic. They will face a battle that mirrors that of gaming consoles: As long as the hardware is up to snuff, the array of titles and content available will be the primary factor that makes or breaks a particular platform.
On the flip side, VR is very isolating. It's the most anti-social piece of technology I've experienced, if we're talking about the physical room I'm sitting in.
When our press group headed upstairs for the Gear VR demo, the room was set up to resemble some classy, futuristic lounge. But instead of people crowding around tables having drinks and talking, people lounged in chairs, alone in their VR experience. This seemed like a nightmarish dystopian cyberpunk scene, where people got their doses of digital Soma.
Cyberpunk utopia or digital dystopian nightmare in the making?
I had a mixed feeling of "Wow, this is awesome" and "Holy crap, is this where we're headed?" as I looked around the room. The Gear VR lounge arrangement was wildly different than the sectioned-off rooms that Rift demos usually take place in. The whole experience was slightly unsettling, like watching the Matrix slowly come into reality.
Setting any techno-fear aside, there are some serious drawbacks to some of the experiences themselves. I can't experience Netflix in VR the same way I can on the couch with my fiancee. I imagine that if I sat through two episodes of Narcos in VR, she'd be pretty unhappy with me. The Twitch demo, meanwhile, only really has utility when used socially. Watching Twitch in VR by myself for hours isn't something I think I'd like to do.
Stepping into VR is like stepping out of this reality for a bit. You're here but there at the same time. This creates an enormous opportunity for immersive experiences unlike any other medium we've had in the past.
We know what it's like to see things on a rectangle. We've been doing it for 100 years. Those rectangles have changed the way the world works. Depending on its adoption and how content creators approach it, VR may follow a similar course.
This is the Wild West period for VR. Companies are staking their claims and developers have a brand-new world open to them. In the rush to populate the new medium with content, one has to wonder if the technology will bring us closer together or push us yet further apart.