This article was published in the July 2015 issue of Maximum PC.
Why augmented reality technology is more than just virtual reality's kid brother
Read the tech and games press, and the buzz is all about virtual reality. Valve revealed its long-hidden VR product Vive, built with HTC, in March. Oculus VR, maker of the Rift, has meanwhile been bought for a cool $2 billion by Facebook, with Facebook CEO Mark Zuckerberg calling it "the dream of science fiction" that will "unlock new worlds for us all."
Yet, quietly, people are whispering that the real story is augmented reality (AR). Influential data firms such as Juniper Research have even put figures on it. Juniper's "Augmented Reality 2015–2019" report predicts revenues of $4.1 billion for AR apps in 2019, with 1.3 billion apps in use. Digi-Capital goes further still, estimating that AR could be worth $120 billion by 2020, with VR valued at a mere $30 billion. That gap reflects fundamental differences in both the underlying experience and the progress made in each field. We'll explore why that is, and whether virtual reality has any chance of catching up.
We'll delve into the way the big technology corporations—including Microsoft, Valve, Google, Apple, and Sony—are looking at this space. Several have already invested heavily—Google with Magic Leap, Microsoft with HoloLens—while others have already walked away, such as Valve, with its deliberate pivot toward VR.
That also means examining how AR tech currently works, and where the next steps will be. After all, low-grade AR has become commonplace in several types of mobile application and looks set to become more widespread. Digi-Capital's prediction is based on AR capturing a large chunk of the cell phone market—with over a billion smartphones already shipped, you could even call $120 billion a conservative estimate.
But that's only five years away. It took 20 years for cell phones to move from the Nokia Ringo, which could merely call people, to today's all-singing, all-dancing smartphones. Will AR move faster? For the lowdown, read on.
Google's Ingress certainly has good press shots, but the AR game looks nothing like this.
The core difference between virtual reality and augmented reality technology is the worlds they move us into. One replaces, the other improves. Virtual reality creates a new world for you. It may be a world identical to the one you're in now, or it may be a world built entirely from bones and elves, but it's a world that's fundamentally separate from the one we inhabit. Nothing you perceive through a virtual reality headset is what's actually outside it. You're totally immersed. It's therefore a tech for totally immersive experiences—escapism like movies or games.
Augmented reality, by contrast, takes the real world as its base and builds on top of it. AR generates virtual items in this world, either using a whole mess of sensors to ensure they're correctly placed on its surfaces, or ignoring the surroundings completely and placing them on a much nearer plane. Though this sounds easier, as you don't have to generate an entire world, there are technical challenges that make some versions of it just as tricky to pull off as VR.
AR's not so good for immersion, because the real world is always there in the background. But that makes it an excellent tool for adding virtual things to the real world. Voice calls, advertising, mapping, social networks… or even what Tim Merel of Digi-Capital calls "a-commerce." Yes, we're talking about augmented shopping.
There are degrees of augmentation, of course. Device-led AR, where the augmentation is imposed on a screen held between the viewer and the scene, is a mediated reality that lets developers judge the environment far more simply. At its simplest, this acts as a head-up display (HUD) that sits over the scene like a 3D movie title sits on its background. This basic tech is the level that firms such as advertising innovators Blippar or Aireal work at. It simply uses your existing device and an app, and imposes a new image on an existing scene, for example by putting an animation of a football player near his promotional merchandise.
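If you fancy seeing just how basic that level is, here's a minimal sketch in Python with OpenCV (the file names are placeholders we've invented for the example): it simply alpha-blends a sprite onto a camera frame at a fixed screen position, with no tracking and no depth—the HUD approach in a dozen lines.

```python
import cv2
import numpy as np

# Placeholder files -- any camera frame and any PNG with an alpha channel will do.
scene = cv2.imread("camera_frame.jpg")                               # the real world
overlay = cv2.imread("player_animation.png", cv2.IMREAD_UNCHANGED)   # RGBA sprite

h, w = overlay.shape[:2]
x, y = 40, 40                          # fixed screen position, like a HUD element
roi = scene[y:y + h, x:x + w]          # assumes the sprite fits inside the frame

# Use the sprite's alpha channel to blend it over the scene -- no depth,
# no tracking, just an image sitting on top of the picture.
alpha = overlay[:, :, 3:4].astype(float) / 255.0
blended = alpha * overlay[:, :, :3] + (1 - alpha) * roi
scene[y:y + h, x:x + w] = blended.astype(np.uint8)

cv2.imwrite("augmented_frame.jpg", scene)
```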
At a slightly more advanced level, this tech can detect a surface or shape in the environment, and use that as a marker to estimate relative depths in the scene. You'll find AR apps that use special "fiducial marker" cards to anchor their sim, allowing the app to scale and rotate a virtual simulation to fit the environment.
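For a flavor of how a marker gives an app that scale and rotation, here's a minimal sketch using OpenCV's open-source ArUco markers as a stand-in for those fiducial cards. The camera intrinsics and marker size are made-up placeholder values, and the aruco module's API differs slightly between OpenCV versions, so treat it as an illustration rather than production code.

```python
import cv2
import numpy as np

# Placeholder camera intrinsics -- real apps calibrate the camera first.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)          # assume no lens distortion
marker_side_m = 0.05               # the printed marker is 5cm across

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

frame = cv2.imread("camera_frame.jpg")          # hypothetical input image
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)

if ids is not None:
    # The marker's known 3D corner positions, in its own coordinate system.
    half = marker_side_m / 2
    object_points = np.array([[-half,  half, 0], [ half,  half, 0],
                              [ half, -half, 0], [-half, -half, 0]],
                             dtype=np.float32)
    # solvePnP recovers the marker's rotation and translation relative to the
    # camera -- exactly the scale and orientation cue described above.
    ok, rvec, tvec = cv2.solvePnP(object_points, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    print("marker distance (m):", np.linalg.norm(tvec))
```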
The more VR-like AR requires a lot of extra tech. You need to be able to track the position of the viewer's head and eyes, and judge relative distances, to make the illusion of a virtual object convincing. Most of these AR devices use a head-mounted display (HMD), which is a headset supporting a display device (or two) in front of the user's eyes. The sort of HMD we're interested in also tracks head position along six degrees of freedom—a phrase that means three components of translation (up-down, forward-backward, left-right) and three components of rotation about those axes.
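As a toy illustration of what those six numbers amount to, here's a short Python sketch (the axis conventions are our own, chosen just for the example): it packs three translations and three rotations into a single transform, then re-projects a virtual anchor point as the head moves, which is the basic bookkeeping every 6DoF headset does every frame.

```python
import numpy as np

def head_pose(x, y, z, yaw, pitch, roll):
    """Build a 4x4 transform from six tracked values: three translations
    (meters; x right, y forward, z up) and three rotations (radians)."""
    cy, sy = np.cos(yaw),   np.sin(yaw)    # yaw: turning left/right (about z)
    cp, sp = np.cos(pitch), np.sin(pitch)  # pitch: nodding up/down (about x)
    cr, sr = np.cos(roll),  np.sin(roll)   # roll: tilting sideways (about y)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Rx @ Ry               # one fixed rotation order
    T[:3, 3] = [x, y, z]
    return T

# A virtual object anchored one meter in front of the starting head position.
anchor = np.array([0.0, 1.0, 0.0, 1.0])

# The head shifts 10cm to the right and turns 30 degrees to the left; the
# renderer re-projects the anchor into the new view so it appears to stay put.
head = head_pose(0.1, 0.0, 0.0, np.radians(30), 0.0, 0.0)
print(np.linalg.inv(head) @ anchor)
```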
There are more extreme techs in the works as well. Two different sets of contact lenses are in development, one academic, one military. The military ones are called iOptik and function much like bifocal lenses, with the twist that they're designed to work only with AR goggles. These contact lenses will allow humans to focus both on the background scene and the HUD on the goggles at the same time. Though they're being developed for the US Department of Defense, the company behind them hopes it can sell them as consumer products soon.
The more interesting academic tech comes from the University of Washington, and is a set of "bionic" contact lenses powered by radio waves and with LED displays built in. (The microfabrication process by which they self-assemble their circuitry using osmotic pressure is fascinating and totally irrelevant.) At the moment, the lenses have only been tested on rabbits, so the tech is still at an early stage, and there are questions about the quality of the images they produce. Still, it's an impressive glimpse of the future.
Of course, this is all minor stuff, mostly built in the hope that the big technology firms will buy out the company behind it. What's of greatest interest right now is what those firms are focusing on, and what they think they can do with it.
Blippar and Aireal want to push ads in your AR experience.
Valve's Blown Out
What's most interesting about Valve's AR offering is that it's no longer Valve's AR offering. After a Night of the Long Nerds in 2013, Valve released 25 people, including its entire AR team, to focus, we presume, on VR. The two AR project leads were given Valve's permission to do whatever they liked with the tech that they'd made, which they've called CastAR. They've created a company called Technical Illusions to finish it off. CastAR is a bit different from other AR. It consists of a pair of polarized glasses with built-in projectors and cameras, and a separate retro-reflective surface studded with infrared LEDs. The camera uses the LEDs to track your head movement, so it can adjust the images that the projectors cast onto the surface. This means each polarized lens gets a different, but coherent, image. Low latency lets you do things like look around an object. In other words, it projects a self-contained virtual reality into the real world.
As a result, it's suited to static setups rather than something more mobile, like the existing Samsung Gear VR and Nintendo 3DS. CastAR's pitch video focuses on a variety of uses: previewing 3D architectural blueprints, playing 3D board games remotely on unexpected surfaces, creating 3D presentations, or simply acting as a 3D desktop computer. As long as you put that retro-reflective material all over your house.
Microsoft Does Everything
You've almost certainly heard of Microsoft's contribution to the AR party—its HoloLens system for Windows 10, and possibly Xbox One. It seems revolutionary, but its use of the word "holographic" is suspect (and seemingly owes more to affection for Star Trek's Holodeck than to actual holography). And remember: Kinect also seemed revolutionary in the hermetic demonstration settings you get at big tech trade shows.
HoloLens definitely looks impressive from the screenshots scattered around these pages. It's a futuristic headset that superimposes 3D creations onto the world and allows you to interact with them. The impressive element here is how high-quality its images are—from the videos and reports, it's utterly compelling, if nowhere near as immersive as the HTC Vive or the Oculus Rift.
The actual headset is a lot bulkier than comparable AR headsets—Google Glass, say—but then, it has the power of a true virtual reality headset because it's not doing a simple 2D overlay. It reportedly weighs around 400g, is adjustable to all head sizes, and is totally wireless. It consists of holographic lenses, depth cameras, and three separate processing units—a CPU, a GPU, and a custom holographic processing unit.
HoloLens is intended more for business functions than entertainment.
The depth cameras are built from the same tech as Kinect, but are lower power, have a wider viewing angle, and are placed around the front and sides of the headset. They track both the user's head and hands, as HoloLens is controlled entirely by gesture and voice, Minority Report–style. This lets you interact with the 3D virtual models of the apps, from building blocks in Minecraft, to sculpting the bodywork of a motorcycle. MS is working on "pinning," which will let you stick these models in place in the environment, so you can move around them, and "holding," so you can pick them up and manipulate them.
The apps are really what wowed us when HoloLens was announced. Microsoft recently bought Minecraft for $2.5 billion, and it's already made a version of it that runs on HoloLens. Similarly, NASA has an app that lets you explore Mars, and there's a version of Skype that runs on it so that a builder can explain to you why you should have spent more time in carpentry classes at school. Though the reports are mostly positive, the tech was at an early stage, and there were concerns over whether the hardware could fit into a consumer unit, and over how regularly the illusion was broken.
That's not all, however, as Microsoft also has several other AR and VR projects underway. It certainly has plenty of AR hardware expertise in the bank—Microsoft bought a portfolio of smart-glasses patents from Osterhout Design Group in March 2014. And another project, called RoomAlive, was shown off in October 2014, consisting of a set of projectors that transformed the walls of an entire room into an interactive environment.
Digressing for a moment, there are also persistent rumors about a VR headset for Microsoft's Xbox One console. After all, the Wall Street Journal said back in March 2014 that the company already had 3D virtual reality tech ready to go. HoloLens has reduced the chances of that coming to market, but we assume Microsoft has it ready as a backup.
Microsoft has bought Minecraft and intends to use it on its HoloLens platform.
Google Does It Better
Google Glass was Google's high-profile effort in the field of augmented reality, and you might argue that it was the company's first high-profile failure, given that Glass has currently been removed from sale ahead of a redesign and a new model. The device—a set of (quite pretentious-looking) plastic and metal glasses, with a HUD projected in front of one eye and a smartphone-like processor behind it all—was on sale for $1,500.
Glass did everything you'd expect, like understand natural voice commands, record video, take photos, and relay all your phone's updates and notifications. It had a small touchpad on the side of the device, which let you browse a timeline of recent events. The screen was a liquid crystal on silicon device with an LED-illuminated display that used polarization and reflectors to bounce the image into your eye. It had a wide range of supported Google apps, including Now, Gmail, Maps, and Google+. All good then? The fact that it's on a forced hiatus says otherwise.
That's not all of Google's AR efforts, though. It's also making simple augmented reality games for its Android phones, such as Ingress, a massively multiplayer location-based game built on Google Maps. And it has Project Tango in the wings. This is a standard technology platform for mobile devices that allows them to navigate the physical world in the same way we inefficient meat-bags do. It uses advanced computer vision, image processing, and special image sensors to create an end-to-end navigation technology that understands its own 3D motion in the world, can perceive depth, and uses visual cues from areas or objects it recognizes to constantly self-correct. At the moment, it's only available to core developers, but we assume it'll be integrated into next-generation Android hardware.
Magic Leap is yet another Google-backed project—Google led a $542 million investment in it—and a direct challenge to HoloLens. It's being built by a team of tech and games industry veterans, including the author Neal Stephenson and the 3D team at WETA (who made the Lord of the Rings special effects). Reports have it as more believable and solid than HoloLens. It works using a virtual retinal display—that is, a display projected straight onto the retina itself.
Magic Leap's stated aim is to reintroduce magic into our lives.
Similar to HoloLens, the simulation looked utterly convincing. The animated 3D creatures it portrayed looked detailed and sharp, and sat well in the surrounding world. And similar to HoloLens again, it ran on a huge piece of hardware (essentially a PC) sitting nearby, rather than in the headset itself. It's worth looking at the promo video to see what it's capable of. Magic Leap hasn't really been announced or promoted yet, but we're expecting it to launch in 2016 or 2017.
Sony's "Me Too" Mentality
The Japanese giant always seems to want to get involved in any new tech, but recently it hasn't been leading the market here. Its Project Morpheus feels like a "me too" VR solution, but it'll surely work well on Playstation 4 and might actually sell well (see "What About VR?" below). Plus, it's already experimented with AR in the form of Wonderbook for the Playstation 3 (see "Try AR Today" below).
However, SmartEyeGlass is its main foray into AR. The currently available SmartEyeGlass SED-E1 Developer Edition is very similar to Google Glass, though much cheaper at just $840. It uses "holographic waveguide technology" in 3mm AR lenses, which produces something very similar to Glass, with overlaid green text and diagrams operating at 15fps. It also has a 3MP camera that can take pictures or video.
Sony's SmartEyeGlass has a low battery life and a cheap look.
It connects to compatible Android phones by Bluetooth, and is controlled by a small, ugly-looking puck that sits on the user's lapel, which doubles as microphone, speaker, NFC antenna, and battery (good for only 150 minutes). At the moment, we'd stay well away from this device. It's ugly as sin, with poor battery life and not many apps. Version two could well be worth looking out for, though.
Apple of the Eye
There haven't been any official reveals of Apple's research into VR, but then Apple is more tight-lipped than a close-mouthed clam ahead of any announcement. Apple does have several patents for AR tech—there's a very interesting one for a "transparent electronic device" that sounds very much like a piece of augmented reality tech. Examples in the patent include using the device to overlay information about a museum exhibit. Interestingly, the device would be able to make itself opaque, and only display selected elements of the background world, otherwise behaving as a normal opaque LCD or OLED display.
That said, an analyst from investment bank Piper Jaffray (annoyingly, but understandably, investment bankers get a lot more access to tech firms than journalists do) published a report in March saying he believes Apple has a small team experimenting with AR, but that consumer AR is still 10 years off. We'll see from Microsoft's and Google's efforts whether that's wrong, but it might be on the money when it comes to mass-market success.
The State of the Art
As that $120 billion valuation by Digi-Capital might indicate, there's a lot of hype around AR and VR at the moment. Hundreds of firms are trying out strange new tech to augment the senses. UK firm Ultrahaptics, for example, uses targeted ultrasound vibrations on a user's skin to form tangible shapes and textures from thin air, so the users can feel them without the need for worn equipment. That, combined with the hand-detecting Leap Motion device, makes for delicately convincing sims, like brushing your hands over ghosts. For VR, we've seen every type of treadmill under the sun—giant balls, resistant pads, harnesses around the waist—anything to convince you that you're in the virtual world.
On balance, the hype is justified. It's not like the first tablets, when Microsoft launched them stillborn into the market. Too many big companies are competing here for this to not be a success for one of them. But challenges remain, and they're not insubstantial. The biggest are in shrinking the tech down to a headset, or headset-and-pack model; in maintaining persistent simulations while doing that; and in preventing object placement errors. It's likely that, after all this experimentation, smartphones will be the first devices that give us a real taste of this. As always in that field, Apple will be the company to watch. That said, Google's Magic Leap investment is considerable enough and the tech advanced enough that we'd cautiously predict it'll be first to market, albeit in a reduced form.
One prediction we're happy to make is that in 20 years' time we'll be looking back at this tech the way we now look back at the first cell phones. These innovations are going to revolutionize many things—anything that requires 3D knowledge, such as architecture or warehouse management; anything that requires management of large data sets, such as programming; and anything that simply wants to look pretty, like art or video games. Now we just have to wait for the hardware to catch up.
Try AR Today
There are many ways to try augmented reality today. As it's a more mature technology, there are some basic devices that take advantage of it already, as well as many cell phone applications to try. The Carl Zeiss VR One headset, for example, supports AR features and will work with any iOS or Android smartphone between 4.7 and 5.2 inches. Google Glass V1.0 may have been canceled, but that's out there too.
There's a huge array of AR apps for smartphones. One of our favorites is GoSkyWatch Planetarium for iPhone and iPad. This is one of many stargazing apps that use the device's accelerometer and GPS to orient your device, so wherever you're pointing, it shows constellations, stars, and nebulae. See also Anatomy 4D, Google Goggles (which can translate text on the fly), Field Trip (which lets you know about nearby attractions), and iOnRoad Augmented Driving, which gives speeding alerts, crash warnings, and driving analytics.
The Playstation Vita has AR features, and comes with a package of free AR games, such as Table Ice Hockey and PulzAR. Similarly, the Nintendo 3DS comes ready-loaded with AR Games and six AR cards. Every game is superimposed on the real world, but has no interaction with it. It's more of a gimmick, really.
If you've got a PS3, you could pick up a copy of Wonderbook. It was a Harry Potter–inspired AR tome with blank pages that only filled when viewed on your TV through the Playstation Eye camera. Similarly, the PS4 has Playroom, a much smoother AR sandbox where you can play with small robots that are running around your living room. Kids love it.
You can also try the Kinect system, on both Xbox 360 and Xbox One. Though it never got the backing it deserved from developers, it has a uniquely detailed depth camera that means it can track your entire body shape—or several, in the Xbox One's case—on-screen. It's probably the most advanced consumer AR tech available on the market today.
Microsoft's Kinect was surprisingly under-used.
What about VR?
We've covered VR in the past, but it's worth giving you a quick status update as to where the tech is today. There are three projects nearing release: Sony's Project Morpheus, Valve and HTC's Vive, and Facebook's Oculus Rift.
Of these, Oculus Rift is the oldest, and several developer iterations have been released. A mightily cut-down version made to work with Samsung smartphones, the Samsung Gear VR, is already on sale. It works by slotting a Samsung Note 4 or Galaxy S6 into a viewing device, and runs at 1280x1440 per eye with a 96-degree viewing angle.
Despite that, there's still no sign of the Oculus Rift consumer model. The most up-to-date version, the Crescent Bay prototype, has a positional tracking camera for your head, a low-persistence OLED display (to eliminate blur), and runs two screens at 960x1080 per eye, at 90Hz, with a 110-degree viewing angle. No release date has been announced, but late 2015 seems likely.
Sony's Project Morpheus is the quickest-developed of the three. As with its AR efforts, Sony seems more concerned with getting a working version of the tech to consumers than with making it cutting edge. The version we tried in July last year was much lower resolution and fidelity than the Oculus Rift versions we'd tried up to that point, but both companies have since substantially improved their hardware. It has a similar OLED screen running at 960x1080 per eye, a 100-degree viewing angle, and a 120Hz refresh rate. It was very comfortable, presumably because much of the hardware was sitting in a set-top box, not on our heads. It tracked our heads using the Playstation camera, and it had true 3D audio. It's due out in early 2016 for PS4, which already has motion-sensitive controllers.
Valve and HTC's Vive headset is the most impressive. It recognizes that some of the joy of VR is in interacting with those virtual worlds, so it does two things. First, it has a pair of bespoke controllers for you to hold, allowing limited interaction. Second, it has a pair of base stations that sit in the corners of your room and track your location and movement, setting the virtual world's limits at your real-world limits so you don't blunder into obstacles.
Vive has two 1080x1200 screens running at 90Hz. As the screens are narrower, you'll get a taller vertical field of view, and the headset should be lighter, as your PC will do all the processing work. Its big selling point is that tracking system: the wireless base stations sweep the room with infrared light, which is picked up by the headset's 37 sensors. This enables you to roam freely in your room and the virtual world. It works with multiple players and should be out this year.
If you want to try VR today, you can get a casing for your smartphone, like the free Google Cardboard, or a cheap third-party headset like the $45 Immerse from Firebox.
The AR Hardware
Not all AR devices share the same hardware and software, but there are some basic components they all need. First off, you need a processor to work everything out, then a transparent display to show the world and the projections, a lightweight power source, and a variety of sensors and input devices.
The sensors can take several forms, but are mostly included as standard in smartphones. An accelerometer measures acceleration—and, via gravity, which way is down—GPS gives global location, and a magnetometer or solid-state compass measures the device's orientation relative to Earth's magnetic field, that is, which way it's pointing. Luckily, modern smartphones contain all of those things.
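To show how those readings actually combine, here's a small Python sketch (the sensor values and axis conventions are assumptions we've made for the example, not any particular phone's API): it uses the accelerometer's gravity vector to "de-tilt" the magnetometer reading and produce a compass heading, the kind of orientation figure an AR app needs before it can pin anything to the world.

```python
import numpy as np

def compass_heading(accel, mag, forward=(0.0, 1.0, 0.0)):
    """Tilt-compensated heading in degrees, 0 = magnetic north.

    accel:   accelerometer reading while roughly stationary; assumed to point
             *away* from the Earth (it measures the reaction to gravity).
    mag:     magnetometer reading in the same device axes.
    forward: which device axis counts as 'forward' (assumed +y here)."""
    accel, mag, forward = map(np.asarray, (accel, mag, forward))
    down = -accel / np.linalg.norm(accel)      # unit vector toward the ground
    east = np.cross(down, mag)                 # gravity x magnetic field = east
    east /= np.linalg.norm(east)
    north = np.cross(east, down)               # completes the horizontal frame
    # Project the device's forward axis onto the horizontal north/east plane.
    heading = np.degrees(np.arctan2(forward @ east, forward @ north))
    return heading % 360.0

# Device lying flat and pointing roughly north-east: heading comes out near 45.
print(compass_heading(accel=(0.0, 0.0, 9.8), mag=(-20.0, 20.0, -40.0)))
```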
For AR technologies that aren't based on cell phones, if you want all these elements, they have to be built in, which can increase the size and cost of the device substantially. If you choose to go without them, you'll lose a huge amount of functionality. It's notable that Sony's AR glasses system has a relatively large external box clipped to the user's lapel, while both HoloLens and Magic Leap have been demoed with large tabletop external units that were actually running the tech. Input systems are another challenge.
Unlike with virtual reality, the user can see their hands, so a keyboard is an option. But also unlike VR, augmented reality encourages users to be mobile. You want to look around the object and touch it, so you want your hands to be either free or holding interactive objects (like Valve's twin pointers). That means the device has to be wireless and the interface has to be voice, gaze, or mediated touch.