General Gaming Article
- Netgear Expands ReadyNAS 200 Series
- Skype Translator Finally Out Of Beta
- Viewsonic VX2475 Smhl-4K Review
- How To: Make IE 11 Your Default Browser, and More
- How To: Make Chrome Your Default Browser, and More
- How To: Make Firefox Your Default Browser, and More
- AMD Fixes Memory Leak with Catalyst 15.9.1 Beta Driver Update
- What I Learned about VR at Oculus Connect
- Newegg Daily Deals: Crucial 500GB SSD, Acer Core i3 Desktop, and More!
- Batman: Arkham Knight Returns to PC End of October
- Router Virus Seemingly Fights the Good Fight
- Fast Forward: Wireless Charging = Wireless Waste
Netgear Expands ReadyNAS 200 Series Posted: 02 Oct 2015 02:03 PM PDT Netgear on Thursday expanded its ReadyNAS 200 Series of Network Attached Storage (NAS) products with the launch of two new units: the ReadyNAS 212 (RN212) and the ReadyNAS 214 (RN214). Both are ideal for the home and home office, packed with a quad-core ARM Cortex A15-based processor clocked at 1.4GHz and real-time 1080p high-definition video streaming and transcoding. Both are available to purchase now from online outlets such as Amazon, Fry's and Newegg. Sold with or without the hard drives installed, the 212 model provides two bays and supports up to 12TB of storage whereas the 214 model features four drive bays supporting up to 24TB of storage. Both units include 2GB of RAM, three USB 3.0 ports, one eSATA port, and built-in virus scanning. They also include two gigabit Ethernet ports that support Link Aggregation, meaning the two ports can work in parallel to offer more throughput. According to the company, the two NAS units provide read speeds of up to 200MBps and write speeds of up to 160MBps when using a RAID mode. Additional features include a "professional-grade" BTRFS file system, media server capabilities with support for DLNA, iTunes and Plex, and "five levels of data protection" consisting of bit rot protection, snapshot technology, automatic backup, and more. There's even Time Machine backup support for Mac customers and free mobile apps for accessing the NAS units from a smartphone or tablet. One of the big selling points is the NAS units' ReadyCLOUD service, which allows users to retrieve data from the storage unit no matter where they are and from nearly any device. "ReadyCLOUD for ReadyNAS 212 and 214 is the only personal cloud that embeds a VPN tunnel with zero-configuration setup while offering access, sharing and synchronization capabilities. 
You can easily sync folders between your ReadyNAS and PCs, and also benefit from Time Machine backup support for all the Macs in your home," the company adds. Is a NAS device right for your home or office? That's a good question. Many consumers may be just fine using a single external drive to back up their data or share media with devices connected to the local network. However, NAS devices are ideal for those who wish not only to store large amounts of data (pictures, audio, video, etc.), but to easily serve up this data in and out of the home or office. Naturally, the larger the capacity, the larger the hit to the wallet. NAS devices are great investments for long-term data storage, but they can get expensive. Netgear's ReadyNAS 212 model without hard drives costs $330, whereas the ReadyNAS 214 model without drives costs $500. Apparently, the company will eventually sell these NAS units with hard drives installed, as it lists configurations and "buy now" buttons that are currently grayed out.
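Those dual gigabit ports matter for the quoted speeds: a single gigabit link tops out around 117MBps of payload, short of the claimed 200MBps reads, so hitting that number requires Link Aggregation. A rough sanity check (the 94-percent usable-payload figure is an assumed allowance for TCP/IP framing overhead, not a Netgear spec):

```python
# Back-of-envelope check of the ReadyNAS throughput claims.
GIGABIT_BPS = 1_000_000_000   # one gigabit Ethernet port, bits/s
OVERHEAD = 0.94               # assumed usable fraction after TCP/IP framing

def max_throughput_mbps(ports: int) -> float:
    """Approximate usable payload throughput in MB/s for N bonded ports."""
    usable_bits = ports * GIGABIT_BPS * OVERHEAD
    return usable_bits / 8 / 1_000_000

single = max_throughput_mbps(1)   # roughly 117 MB/s, below the 200 MB/s claim
bonded = max_throughput_mbps(2)   # roughly 235 MB/s, headroom for 200 MB/s reads
print(f"1 port: {single:.0f} MB/s, 2 bonded ports: {bonded:.0f} MB/s")
```

In other words, the 200MBps read figure is only reachable when both ports work in parallel, which is why the units ship with Link Aggregation support.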
Skype Translator Finally Out Of Beta Posted: 02 Oct 2015 02:00 PM PDT Microsoft's Skype team stated on Thursday that Skype Translator has come out of beta and is now rolling out to the Skype for Windows desktop app. Skype Translator aims to break down the walls of communication starting with six voice languages – English, French, German, Italian, Mandarin, and Spanish – and 50 messaging languages. "Since December 2014, when we released the Skype Translator preview app, hundreds of thousands of people have used the app and have given us instrumental feedback," the team said in a blog on Thursday. "Thank you to everyone for this amazing input!" The blog states that Skype Translator is powered by machine learning technology. The more it's used, the smarter the technology becomes. The team said it saw big improvements in the language conversion as the beta testers used the app each day and will likely see loads of refinement as Skype Translator is dished out to a "broader" audience. Microsoft provides a few examples in its report, stating that a PhD student enlisted in the beta "enhanced" his thesis research with the help of experts located in other countries. The company also says that a world traveler based out of Australia managed to find his way across a number of continents using the beta and translated "key phrases." A shop owner uses Translator to purchase goods through Skype's instant message service. "It has been a long-time dream at Skype to break down language barriers and bring everyone across the globe closer together," the Skype team adds. "Researchers, engineers, and many others across Microsoft have been working hard to make this dream a reality and we are looking forward to bringing this preview technology to more devices." Skype customers using the Windows desktop app will see the new feature within the next few weeks. They'll know Translator is up and running when they see the associated icons located under the current Video Call and Call buttons. 
Viewsonic VX2475 Smhl-4K Review Posted: 02 Oct 2015 01:58 PM PDT At a glance: (+) 4K; (-) No Way. This article was published in the November 2015 issue of Maximum PC. For more trusted reviews and feature stories, subscribe here.
What's a few inches between friends? This whole 4K thing is all about compromises right now. Forgetting the fact that running games at the mighty 3840x2160 native resolution of 4K monitors is incredibly demanding on your graphics hardware, just trying to find the right monitor in the first place can be incredibly tough. The first 4K panel we checked out was a frighteningly expensive Asus 32-inch IGZO panel; great-looking but wallet-destroying. From there it was either weaker panel tech in TN trim for the larger screen sizes, or too-expensive IPS technology. Or you could compromise and go for a smaller screen space paired with a finer panel. This is the route Viewsonic has taken with its latest 4K monitor. Matching Samsung's PLS panel technology (IPS by any other name would smell as sweet…) with a 24-inch screen size means Viewsonic can offer lovely-looking image fidelity for around $400. In 4K terms, that's a bit of a bargain. Pretty much any other 4K panel you'd care to mention at that price will be resolutely TN. Not that today's twisted nematic tech is as bad as the old days—both the color reproduction and viewing angles are much improved—but it's still nowhere near the image quality you'll get from a bona fide IPS screen. And the oh-so-similar PLS technology in this Viewsonic panel is top-end 8-bit style, so it's rocking 16.7 million colors and full sRGB color depth. The white reproduction on this screen is pretty much immaculate, and the contrast levels typically excellent. As is the way with this sort of panel tech, however, the black levels aren't quite up there with the depth you'd get from an MVA screen like the lovely and large Philips BDM4065UC.
But when you're talking about color reproduction and viewing angles, the Viewsonic's got it. So, all is rosy then, right?
The Scaling Situation
If this monitor were just a little bit bigger—say, 27-inch—we'd be all over it. With a quality display, we'd say 27-inch is about the minimum screen size you can really get away with for a 4K monitor. And even that is pushing it. You don't really get the same stunning effect that the extra pixel count has on image depth with a 27-inch panel that you do with something like that 40-inch Philips, let alone a 24-inch screen such as this Viewsonic. At that size, you're simply not getting the most out of the 4K resolution when we're talking gaming. The incredible demands that native resolution makes upon your graphics card are only worth the GPU effort when you're really getting to see the full benefit from the extra texture detail. And on the desktop, the native font on a 24-inch display is eye-straining, to say the least. Windows 10 has of course improved its scaling efforts, but you'll inevitably come across older software that simply doesn't work with the new UI scaling boost. We use FRAPS in hardware testing almost constantly, and if you ever try running that classic app on a scaled desktop, you'll see what we're talking about. With a $400 price tag, we can forgive it the flimsy, plasticky chassis. There's no height adjustment or twisting here; this is a basic setup. Until you get to the inputs, that is. Incongruously, this is the first panel we've had in the lab that rocks the new HDMI 2.0 interface, which supports the 4K resolution over HDMI at 60Hz for the first time, providing you've got a compatible GPU. With just a few extra inches across the diagonal, this would be a seriously impressive 4K package. Unfortunately, at 24 inches, the screen size is too small. It's just too big a compromise. $400, www.viewsonic.com
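The HDMI 2.0 point is simple bandwidth arithmetic: 4K at 60Hz pushes more raw pixel data than HDMI 1.4 can carry. A quick check (this ignores blanking intervals, so real link requirements are somewhat higher; the payload rates reflect HDMI's 8b/10b encoding):

```python
# Why 4K at 60 Hz needs HDMI 2.0: a rough bandwidth check.
def video_bandwidth_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Raw pixel bandwidth in Gb/s (ignores blanking intervals)."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

HDMI_1_4_GBPS = 8.16   # 10.2 Gb/s TMDS rate * 0.8 (8b/10b encoding)
HDMI_2_0_GBPS = 14.4   # 18.0 Gb/s TMDS rate * 0.8

uhd60 = video_bandwidth_gbps(3840, 2160, 60)
print(f"4K60 needs ~{uhd60:.1f} Gb/s raw")     # ~11.9 Gb/s
print("Fits HDMI 1.4:", uhd60 <= HDMI_1_4_GBPS)  # False: 1.4 caps 4K at 30Hz
print("Fits HDMI 2.0:", uhd60 <= HDMI_2_0_GBPS)  # True
```

That ~11.9Gb/s figure is why HDMI 1.4 panels have to drop 4K to 30Hz, and why a compatible GPU with HDMI 2.0 output is required for the full 60Hz experience.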
How To: Make IE 11 Your Default Browser, and More Posted: 02 Oct 2015 01:54 PM PDT So, you want to use Internet Explorer 11 in Windows 10, but you're not sure how to set it as the default browser. The first trick is to actually find the desktop program, which you can do by simply typing "Internet Explorer" in Cortana's search field. After that, here are instructions on how to make it the default in Windows 10, how to clear the history, and how to clear the cache.
Make Internet Explorer 11 your default browser:
1. Hit the gear icon sitting between the star and smiley icons to bring up the Tools menu, as shown above.
4. Click the "Make Internet Explorer the default browser" link.
5. Locate Internet Explorer in the list and click "Set this program as default."
Clear the browser history:
1. Click the gear icon to bring up the Tools menu.
4. Click the Delete button to choose what Internet Explorer can delete.
Clear the browser cache:
1. Click the gear icon to bring up the Tools menu.
5. Note: To change the location of the cache, change its size, or to view its contents, hit the "Settings" button, as shown above.
How To: Make Chrome Your Default Browser, and More Posted: 02 Oct 2015 01:49 PM PDT Are you a Google Chrome user and want to make it the default application for surfing the Internet? We show you how to make that change in several easy steps, along with how to clear the browser cache and history.
Make Google Chrome your default browser:
1. Click the button to the right side of the address bar that features three lines. This button will read "customize and control Google Chrome."
4. Click the "Make Google Chrome the default browser" button.
6. Click the current browser and choose Google Chrome in the list, as shown above.
Clear the browser history:
1. Click the button to the right side of the address bar that features three lines.
5. In the new popup window, check the "Browsing history," "Download history" and, if needed, "Cookies and other site and plugin data."
Clear the browser cache:
1. Click the button residing to the right of the address bar that features three lines.
5. In the new popup window, check the "Cached images and files" option.
How To: Make Firefox Your Default Browser, and More Posted: 02 Oct 2015 01:42 PM PDT Want to make Mozilla's Firefox browser your default gateway to the Internet? Want to cover your tracks and delete your browsing history? You've come to the right place, as we lay out instructions on how to make Firefox your default browser, how to clear the history, and how to get rid of the cached files. In the end, you'll come out with more hard drive space and peace of mind that no one knows your browsing habits.
Make Mozilla Firefox your default browser:
1. Click the Menu button on the far right that features three lines.
4. Click the "Make Default" button.
6. Choose Mozilla Firefox in the "Choose an app" menu.
Clear the browser history:
1. Click the Menu button on the far right that features three lines.
6. In the "Details" drop-down menu, make sure both "History" options are checked.
Clear the browser cache:
1. Click the Menu button on the far right that features three lines.
6. In the "Details" drop-down menu, make sure the "Cache" option is checked.
AMD Fixes Memory Leak with Catalyst 15.9.1 Beta Driver Update Posted: 02 Oct 2015 01:16 PM PDT Let's try this again
AMD recently released a new Catalyst driver in beta form, version 15.9, which contained "performance and quality" optimizations for the Star Wars: Battlefront beta and DirectX 12 optimizations for the Fable Legends benchmark. Good stuff, except it also introduced a memory leak. "We are aware that some users are experiencing an issue in AMD Catalyst 15.9 Beta that causes all available video memory to be used whilst resizing active browser windows," AMD stated in a support document. "Thank you for bringing this issue to our attention and being patient whilst we continue to investigate. We're working on getting it resolved as soon as possible." At the time, AMD advised Radeon graphics users to roll back to Catalyst 15.7.1, though that's no longer necessary. AMD has a new Catalyst driver available, Catalyst 15.9.1 Beta. It offers all the same performance benefits and fixes as the previous beta, but without the memory leak issue. You can download the driver here.
What I Learned about VR at Oculus Connect Posted: 02 Oct 2015 12:48 PM PDT Insights into what's coming in the world of VR from Oculus Connect
The thing about virtual reality is that it's hard to describe to people who have never experienced it. Imagine something awesome that you've never experienced, and that is wholly subjective. It's similar to trying to describe the effects of drugs or alcohol to people who have never tried them. Sure, you can describe the effects in a technical and medical manner: dizziness, possible nausea, euphoria—but you can't tell them what life's like after two and a half whiskey sours. That part is subjective. In much the same way, you really have to experience VR to get it. That experience is up to the wearer. My first VR experience was with the humble little Google Cardboard, just a little over a year ago. I was surprised that a simple Android phone could create an immersive experience that vivid. When I tried the Oculus Rift and HTC Vive, both products blew my hair back in terms of image quality. The Samsung Gear VR, while really not much more than a really, really fancy Cardboard headset, offers better interface controls and optics than Google's corrugated paper solution. That's not to say it's not breathtaking, but the Gear VR is notably less awesome than its bigger brother, the Rift. However, at $100, the Gear VR will offer a very good VR experience with a relatively low barrier to entry (assuming you've got a Samsung Galaxy S6). The Samsung Gear VR offers a surprisingly good VR experience at a fraction of the cost of the Oculus Rift. When I drove up to the Loews Hollywood Hotel in Los Angeles, I didn't quite know what to expect from Oculus Connect. After all, what could be announced that would be groundbreaking? The Rift release window? We already know it'll be Q1 2016. The Oculus Touch controllers? Q2. There were a few partnerships to be announced, as well as the pricing for the Gear VR ($100) and the release date (November).
What this conference was really about was content. When we're talking content, we're not just talking about games, though games are the easy low-hanging fruit that you'd expect in VR. The fact is, while gamers may rush to pick up VR headsets, the big money sits outside of gaming. John Carmack—Oculus' chief technology officer and the guy who birthed the first-person shooter when he wrote the code for Wolfenstein 3D and Doom—gave a dense, stream-of-thought keynote at Connect that had nerds everywhere listening. When even Carmack is talking about content and video, not just game engines and lighting polygons, you know something is up.
John Carmack, Oculus's chief technology officer. The big money—and this is where Oculus is apparently trying to make its mark—is in the kind of content my mom would consume. That means Netflix and other things that won't necessarily require a whole lot of interaction. Oculus's deal with Netflix (along with the Minecraft deal with Mojang) was the biggest business news at Connect, by far. Combined with Twitch, Facebook just got distribution deals with some of the biggest video content players on the Internet. While the deals with Twitch and Netflix are huge, those are just the tip of the iceberg when it comes to non-game content. The two big streaming services, as awesome as they are, still focus on delivering 2D video within a virtual living room. To some, that may seem like a bit of VR hubris gone too far: Most people can already watch Netflix in their living room. That doesn't mean that Facebook isn't banking on VR being a big thing.
VR Streaming
One of the talks in the first round of sessions at the TCL Chinese Theater was all about streaming VR video. With a hundred or more developers and content makers packed into the movie theater for the talk, it was readily apparent that there are plenty of people who could be working on creating content to stream to VR users. VR streaming seems easy enough on its face, but in reality, it presents a set of challenges that regular video streaming just doesn't have to deal with. David Pio, a video streaming engineer at Facebook, gave the talk and explained the methodology Facebook was using for 360-degree VR video streaming. First off, it really helps to imagine what a VR video would look like as a geometric object. From the viewer's perspective, the VR experience should be a sphere, since you can look in any conceivable direction. But since when did cameras capture video in spheres? They don't.
Instead, software has to stitch together all of these rectangles into a cube, and do a little logical magic to make the video appear as spherical as possible. But that cube has way too much data for the average 5Mb/s Wi-Fi connection, as Facebook put it. To get down to 5Mb/s, Facebook has to compress that video and discard most of that cube. Pio said Facebook first approached it by lowering the quality of the video out of the user's field of vision and using blurs in the peripheral field of view of the user. They also had to rethink how they buffer video: Instead of buffering 10 or more seconds of video like YouTube does, they buffer one second. That second is looking in one direction, with the other directions reduced in quality. If the headset moves, the video is still visible, but either blurry or at noticeably lower quality. Pio said that this blurry or low-quality video was preferable to having a blank screen when you move your head rapidly. Luckily, as the headset moves, the movement data is sent back up to the server, which then sends down another second of video, this time looking in the new direction. With the constant polling of headset direction, buffering more than a second of video becomes a monumental waste of time and resources. Even with the polling and reduced quality cube, there's still too much data flowing down the pipe. Facebook chose to address this by reducing the cube to a pyramid, with the base as the primary, in-focus viewing area. Using a pyramidal shape rids the compression of having to do anything with the "back" wall that the user can't see, and allows some more aggressive compression of the other walls as they are reduced to triangles. The VR streaming video pyramid. When the server calculates this pyramid, it unfolds the pyramid and streams the video as a rectangle. The decoder on the client side then re-folds this video into the pyramid and outputs the video to the VR headset. 
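The polling-and-rebuffering loop Pio described can be sketched in a few lines. Everything here is illustrative: the segment naming, the 30-degree heading buckets, and the CDN URL are assumptions for the sketch, not Facebook's actual scheme. The idea is just that each second of video exists in several pre-encoded orientations, and the client picks one per head pose:

```python
# Sketch of view-dependent streaming: buffer ~1 second of video oriented
# toward the viewer's current heading, and fetch a differently oriented
# segment when the head turns. Names and constants are hypothetical.

SEGMENT_SECONDS = 1.0      # buffer a single second, not 10+ like flat video
HEADING_BUCKET_DEG = 30    # one pre-encoded pyramid stream per 30-degree bucket

def heading_bucket(yaw_degrees: float) -> int:
    """Map a head yaw to the nearest pre-encoded pyramid orientation."""
    return int(round(yaw_degrees / HEADING_BUCKET_DEG)) % (360 // HEADING_BUCKET_DEG)

def next_segment_url(base_url: str, t: float, yaw_degrees: float) -> str:
    """URL of the 1-second segment whose pyramid base faces the viewer."""
    seg_index = int(t // SEGMENT_SECONDS)
    return f"{base_url}/seg{seg_index:05d}_dir{heading_bucket(yaw_degrees):02d}.mp4"

# The same instant in the video maps to a different pre-encoded stream as
# the viewer turns; the server never re-encodes on the fly.
print(next_segment_url("https://cdn.example/vrvideo", 12.3, 5.0))
print(next_segment_url("https://cdn.example/vrvideo", 12.3, 95.0))
```

This is also why buffering more than a second is wasteful: a buffered segment is only full quality in one direction, and a head turn invalidates it almost immediately.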
A single frame in the second that is streamed contains the directional data and tells the client how the pyramid was unfolded. The geometry and video polygon location within the streamed frames can change based on head movement and compression efficiency for a given region of video. Voilà: VR streaming video that's much closer to the 5Mb/s threshold. While logically reducing to other geometric shapes—like a cone—has been considered, Pio said that the pyramid had yielded the best performance and overall quality. While the technical parts of VR video streaming are interesting in themselves, the implications are numerous. By making VR video "cheap" enough—in terms of resources and bandwidth—it opens the doors for mass adoption of the medium. Imagine live, streaming video where you can be court-side—right next to Jack Nicholson—at the Lakers game, from a hotel room in Boston. Imagine diving on the Great Barrier Reef with researchers from your living room. Those are the types of experiences that become possible with this technology.
The New Cinema
While Oculus Connect is primarily a developer's convention, there were plenty of content creators and creative types in attendance, too. And for the second talk on the first day, people crammed into the theater where Facebook's David Pio had just given his streaming talk to see Rob Bredow talk about how Lucasfilm and Industrial Light & Magic are using VR to design experiences as well as plan shots. I think most of the people came to the talk because, you know, Star Wars. That wasn't far off, since Bredow showed lots of video of how Lucasfilm and ILM were using VR technology to tell stories within Lucas's far, far away galaxy. One of the first tools Bredow showed off was the use of VR to scout locations, using an in-house tool called V-Scout. The scouting tool allows directors to plop down digital assets on a landscape and move around within it to find the best angles and determine movement of action.
The tool, he said, can mimic various Panavision lenses, to give the user an idea of how the scene will actually look when shot. Scouting locations is an expensive part of filmmaking, and being able to do it remotely using topographical data and VR imagery could cut costs for film production, Bredow said. The other really impressive tool that ILM showed off was the live rendering process they developed. The live rendering can capture human actors in a motion capture suit, and plop those actions into a digital scene. This isn't too unlike what games already do: dynamically render 3D scenes as they are played out. But Bredow was quick to point out that this tech was quite different, and didn't focus on viewer interaction. In one demonstration, Bredow showed a scene that had been scripted using this technology. A squad of stormtroopers patrol a desert village, looking for Rebel droids, of course. R2-D2 and C-3PO emerge from a shadowy house, requesting a pickup from a Rebel ship captain. In the distance, a ship holds off an approaching AT-AT. As the droids turn to find an alternate route, they are confronted by none other than Boba Fett, and the scene ends. While the video looked good and near film-quality as it was, Bredow backed up to show what made this so cool: With VR, you can see any part of the scene you like, and aren't tied to what the "camera" shows you. After he restarts the demo, he pauses the "video" and moves the camera perspective around different sides of the stormtroopers. From there, he looks around to the house where the droids are hiding. After moving the perspective to include the droids, he pushes play, and we see and hear Princess Leia giving instructions to the droids before they emerge. Jumping around, we see what Boba Fett was doing (blasting some folks) before he runs into the droids. Bredow says that this technology is great for telling shorts where users can examine multiple storylines that happen at the same time. 
The Boba Fett or droid scenes normally would have "been left on the cutting room floor," Bredow says. But with VR and dynamic rendering, viewers can explore these hidden plot points to get a better understanding of the story. While ILM was showing off its in-house tools, Oculus was busy giving stuff away to filmmakers. During the keynote, Oculus announced that it would be basically open-sourcing the assets used to create its VR experience Henry. While the story of Henry—a cute, melancholy hedgehog who just wants a friend to be able to hug him without fear of being impaled by his spines—wasn't particularly deep, the experience itself was a great proof of concept. It was pretty much like being inside a Pixar short. Henry, the hedgehog. (Oculus) Offering up the experience as a boilerplate for other creators to learn how to create VR experiences, while sounding very open, is somewhat analogous to giving away code examples to programmers. By seeing how an entire application works, a developer can use some of the same methodologies or tools to solve their particular problem. Creators who are interested in VR production will be able to use Henry in much the same way. I got a chance to talk to Eugene Chung, founder of Penrose Studios, in the developer lounge at the conference. He was the only VR film developer in the room, which was dominated by game developers. I wanted to pick his brain about what he thought about VR as an artist. Chung showed me a short called The Rose and I that Penrose had produced, based on the classic story of Le Petit Prince (The Little Prince). Much like Henry, it was more of a film than a game, though it was rendered dynamically and you could move around in the environment and look in all directions. From what I can tell, what to call passive VR "films" is still up in the air. "VR experience" seems to be a popular term, but "VR film" has been used here and there, too. "It truly is a new art form," Chung told me.
He likened using VR to the emergence of film. "I can tell you that just as cinema is its own language, VR is its own language." VR as a medium, Chung said, is just as different from film as film was from theater. The storytelling remains much the same, but how you visually represent stories is quite different. That doesn't mean that VR will kill the movies. People still go to plays, after all. The VR experience is wildly different from seeing a film on the big screen. As it is now, there's no real replacement for hugging your significant other or laughing with your friends while watching a film together.
Creative tools
ILM's programs weren't the only creative tools highlighted at Connect. If there was one non-game that stole the show at Connect, it was Medium. The program serves as Oculus's "paint program for VR." I got to play with Medium, and I thought it was actually the best use of Oculus Touch that I experienced (to be fair to other devs, there are only so many demos you can try in a given time). While I played in Medium with an Oculus software engineer who works on the project, I felt a childlike joy as I discovered the tools needed to create an (admittedly poor) sculpture. As much as I love the Eve: Valkyrie demo for its fulfillment of a childhood fantasy of being a starfighter pilot, Medium touched me on another level. Wes Fenlon from PC Gamer tries out Medium at Oculus Connect. It seems straightforward enough: You're in a room, and you can create 3D objects with a palette of tools found on your off-hand. Once your tool and color are set, you can add material to the 3D space. It just floats there, defying gravity, which should make creating virtual pottery much easier than the real thing. When you're done, Medium allows you to save the object to an .obj file, or other 3D object files. You can then send that file to a 3D printer, plop it into a game, or use it as a 3D asset in filmmaking.
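That .obj export is worth a note: Wavefront OBJ is a plain-text format where `v` lines list vertex coordinates and `f` lines stitch them into faces using 1-based indices, which is why nearly every 3D printer and game engine can read it. A minimal writer, just to show how little there is to the format (the helper name is our own):

```python
# Write a mesh as Wavefront OBJ: "v" lines are vertices, "f" lines are faces.
def write_obj(path, vertices, faces):
    """Write vertices [(x, y, z), ...] and faces [(i, j, k), ...] as OBJ."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            # OBJ face indices are 1-based, so shift from Python's 0-based
            f.write("f " + " ".join(str(i + 1) for i in face) + "\n")

# A single triangle in the XY plane
write_obj("triangle.obj",
          vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          faces=[(0, 1, 2)])
```

A real sculpture from Medium would just be the same two record types repeated many thousands of times, which is what makes the file a convenient interchange format for printing and filmmaking pipelines.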
The engineer had admin control of the experience, and at one point she revoked my use of the tools to demonstrate a different tool—the symmetry plane. As useful as the plane is for creating objects that won't look lop-sided, I was more struck by the admin abilities of the program. I instantly thought of a teacher showing sculpting or digital art students how to create a specific shape, without fear of the students altering the object before she was ready. While great for artists, Medium in its current state is limited for designers. It's really tough to get straight lines or exact curves like you can in Maya or AutoCAD. Oculus software engineer Lydia Choy said that straight-edge tools and the like are in development. Even with its limitations, my thoughts wandered to people with disabilities. I have a friend who gets severe pain in his hands, which precludes him from doing basic things like typing for long periods. This friend is an artist, too, and the loss of his hands as useful tools has been particularly hard on him. Medium with Oculus Touch offers a solution where a person can create 3D art with very little physical effort. Besides the fact that you don't have to hold up a heavy object to carve it, the Touch controllers are easy to actuate and light enough to use for long enough stretches to be useful. Other creative applications like Medium could have far-reaching implications, especially if developers integrate a social element. Working on an object by yourself is cool, but it's way better if you can do it with someone else. Having a spare set of eyes and other ideas could help engineers, artists, and even doctors.
Gamification of Training
I talked to a pair of game developers from Bossa Studios, Sylvain Cornillon and Henrique Olifers, about creating games for VR. Bossa Studios makes the VR game Surgeon Simulator, where the player is tasked with slicing and dicing patients.
As with many games, this allows people without a medical license to do things they'd otherwise never get to experience. Like many VR games for Oculus, Surgeon Simulator is "tabletop" style, meaning that you play on a virtual surface, looking down at the objects you interact with. This play style is popular since the Rift doesn't lend itself to moving around very much. Surgeon Simulator lets you chop up humans and aliens in search of squishy organs. Both Olifers and Cornillon said that keeping the player in one place, while a limitation on its face, allows for a lot of creativity. "The level of detail for objects must be high," Cornillon said. The two men noted that in many FPS games, players often sprint right past small objects like bottles or barrels that an artist had to work on. In VR, the player spends a lot more time looking at those objects, so they have to be higher in detail, and artists can justify putting a lot of sweat equity into creating them. "You can have a lot of fun in one space," Olifers said. Getting off the gaming subject, I asked Cornillon and Olifers if they thought Surgeon Simulator could be used in a training environment for EMT, military, or medical students. They said it was entirely possible, though not with the game in its current form. This isn't surprising. Pilots have been using simulators and VR to train for decades. It would make sense that as the technology improves, training in VR for surgeons or other people who use their hands could be a cost-effective supplement to "real" hands-on training. After all, electrons are cheaper than cadavers. At least, I hope they are.
What's in it for us
With all the tools becoming available to developers and content creators, the clear winner is the end user. As it was with graphics technology, gaming will likely drive the bleeding edge of VR development. However, the big money will be in content, and likely the passive type.
What we're witnessing with VR headsets like the Gear VR, HTC Vive, and Oculus Rift is the creation of a new medium. Oculus, and by extension Facebook, is clamoring to make sure that there will be plenty of content for its platform as VR is unveiled to the wider public over the next six months. What this means for media consumption is really anyone's guess, but the new medium has clear advantages and pitfalls. On the upswing, VR allows us to be more social over distance. Sure, people are "social" via text and photography on Facebook or Twitter, but there's something more intimate about watching a match on Twitch in a virtual room or working on a virtual sculpture together. Seeing someone's avatar does fool you into thinking there is someone else physically there. The simple act of waving at the engineer in Medium was enough to convince me that she was actually there, in the room with me. Even the use of VR video or VR experiences has the potential to engage our sense of empathy. While sitting on the train to catch my flight to Los Angeles, I listened to a TED Radio Hour episode about screens. The podcast mentioned a program where the UN shot a spherical video in a Syrian refugee camp. Watching the video in VR and seeing the children wave at the cameras left some of the diplomats who saw it in tears. Say what you want about the subject matter, but the ability to feel presence in VR creates more empathy for others. The sense of presence is more connective than seeing things through a rectangular portal. As one presenter noted at Connect, in VR, you can't look away. That itself may have an immense power to connect us as we tell stories, play games, or create. Facebook has good reason to be bullish on its investment in VR; it's working hard to be the dominant force in the space with Oculus. That doesn't mean other VR vendors won't have plenty of room to create that sense of presence and magic.
They will face a battle that mirrors that of gaming consoles: As long as the hardware is up to snuff, the array of titles and content available will be the primary factor that makes or breaks a particular platform. On the flip side, VR is very isolating. It's the most anti-social piece of technology I've experienced, if we're talking about the physical room I'm sitting in. When our press group headed upstairs for the Gear VR demo, the room was set up to resemble some classy, futuristic lounge. But instead of people crowding around tables having drinks and talking, people lounged in chairs, alone in their VR experiences. It seemed like a nightmarish dystopian cyberpunk scene, where people got their doses of digital Soma. Cyberpunk utopia or digital dystopian nightmare in the making? I had mixed feelings of "Wow, this is awesome" and "Holy crap, is this where we're headed?" as I looked around the room. The Gear VR lounge arrangement was wildly different from the sectioned-off rooms that Rift demos usually take place in. The whole experience was slightly unsettling, like watching the Matrix slowly come into reality. Setting any techno-fear aside, there are some serious drawbacks to some of the experiences themselves. I can't experience Netflix in VR the same way I can on the couch with my fiancee. I imagine that if I sat through two episodes of Narcos in VR, she'd be pretty unhappy with me. The Twitch demo, meanwhile, only really has utility when used socially. Watching Twitch in VR by myself for hours isn't something I think I'd like to do. Stepping into VR is like stepping out of this reality for a bit. You're here but there at the same time. This creates an enormous opportunity for immersive experiences unlike any other medium we've had in the past. We know what it's like to see things on a rectangle. We've been doing it for 100 years. Those rectangles have changed the way the world works.
Depending on its adoption and how content creators approach it, VR may follow a similar course. This is the Wild West period for VR. Companies are staking their claims and developers have a brand-new world open to them. In the rush to populate the new medium with content, one has to wonder if the technology will bring us closer together or push us yet further apart.
Newegg Daily Deals: Crucial 500GB SSD, Acer Core i3 Desktop, and More! Posted: 02 Oct 2015 10:29 AM PDT Top Deal: Do you know someone suffering from slow load syndrome? It's a serious affliction with all sorts of unwanted symptoms. Luckily for them, there's a cure. Just point them to today's top deal for a Crucial MX200 500GB SSD for $160 with free shipping (normally $174). All they have to do is swap out their pokey mechanical hard drive for one (or two!) of these and they'll be cured with read and write speeds of up to 555MB/s and 500MB/s, respectively. Other Deals:
- Asus GeForce GTX 960 4GB 128-Bit GDDR5 PCI Express 3.0 SLI Support Video Card for $220 with free shipping (normally $239 - use coupon code: [EMCKAAK27]; additional $20 mail-in rebate; FREE Heroes of the Storm w/ purchase!)
- Acer Desktop Intel Core i3 4160 (3.60 GHz) 4 GB DDR3 1 TB HDD Win7 Professional 64-bit for $465 with free shipping (normally $500)
- WD Black 4TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5-inch Internal Hard Drive for $190 with free shipping (normally $198 - use coupon code: [ESCKAAK29])
- G.Skill Ripjaws 4 Series 32GB (4 x 8GB) 288-Pin DDR4 Desktop Memory for $190 with free shipping (normally $195 - use coupon code: [EMCKAAK37])
Batman: Arkham Knight Returns to PC End of October Posted: 02 Oct 2015 09:34 AM PDT Brace yourself, Batman is coming (again)
The developers at Rocksteady Studios are still working frantically to fix lingering issues in the PC version of Batman: Arkham Knight, but by the end of the month, publisher Warner Bros. Interactive Entertainment expects that sales will resume, the company announced on Steam. Batman has seen better days, at least on the PC. Just two days after launching in June, WB halted sales of the PC version on Steam and pulled copies from store shelves due to some serious performance issues. "We take these issues very seriously and have therefore decided to suspend future game sales of the PC version while we work to address these issues to satisfy our quality standards. We greatly value our customers and know that while there are a significant amount of players who are enjoying the game on PC, we want to do whatever we can to make the experience better for PC players overall," WB said at the time. It took until early September for WB to release an interim patch to existing owners. The response to the patch was mostly positive -- it fixed a number of issues, including low resolution texture bugs and hitches when running the game on mechanical hard drives, and improved performance on all GPUs. Nevertheless, there were still some bugs that needed to be stomped out, and it appears the developers have made significant headway since then. "While there were significant performance improvements made to the game, the teams are continuing to work on the additional updates that were outlined in our previous post. We expect these updates to be ready at the end of October, at which time the PC version will be made available for purchase," WB said. When it re-releases on PC, it will also include support for all DLC that has been released to consoles so far.
Router Virus Seemingly Fights the Good Fight Posted: 02 Oct 2015 09:06 AM PDT Good news, you have a virus!
Cyber criminals are beginning to take an increased interest in home routers and the Internet of Things (IoT) market as a whole. It's not that there's a lot of personal data sitting on such devices, but the allure of controlling all these Internet-connected gadgets is what's of interest, especially when plotting a distributed denial-of-service (DDoS) attack. However, a newly discovered virus that's taken up residence on thousands of routers may have your best interest in mind. Security firm Symantec is calling the virus Linux.Wifatch (just Wifatch from here on out). It first came to light in 2014 when a security researcher noticed some unusual activity on his home router. After doing some digging, he discovered a rather sophisticated piece of code that turned his router into a zombie connected to a P2P network of infected devices. Symantec did some digging of its own and found that much of Wifatch's code is written in Perl. It targets several architectures and ships its own static Perl interpreter to each one. Once a device is infected, it connects to a P2P network that distributes threat updates. "The further we dug into Wifatch's code the more we had the feeling that there was something unusual about this threat. For all intents and purposes, it appeared like the author was trying to secure infected devices instead of using them for malicious activities," Symantec explains. Symantec hasn't found a shred of evidence to suggest Wifatch is shipping payloads used for malicious purposes, like DDoS attacks. Just the opposite, it appears that Wifatch is making routers more secure, both by blocking outside hacks and attempting to remove any existing malware it finds. So it appears there's a vigilante hacker out there, a geek version of Batman, if you will. However, Symantec notes that even though Wifatch appears to be making routers more secure, it's still being installed without consent.
It also contains several backdoors that the author could use for malicious purposes, if desired. "Whether the author's intentions were to use their creation for the good of other IoT users—vigilante style—or whether their intentions were more malicious remains to be seen," Symantec says.
Fast Forward: Wireless Charging = Wireless Waste Posted: 02 Oct 2015 12:00 AM PDT With 65 percent of energy wasted, wireless charging has a way to go. Imagine huge transcontinental airliners powered by wireless energy, cruising the skies unburdened by bulky fossil-fuel tanks. Such a future was imagined for 1985 in the science-fiction novel Haunted Airways, published in 1937. Instead, 1985 brought us Cherry Coke and Madonna's "Like a Virgin." Thirty years later, we still don't have those fabulous, wireless-powered airliners. But we're getting Wi-Fi in coach class, so at least there's some progress. My point is that wireless energy has been a futuristic dream for a long time—in fact, since the days of Tesla (Nikola, not Musk). The latest manifestation is wireless charging for our battery-powered gadgets. This technology began appearing at least 10 years ago. Although much has happened since then, I'm still a skeptic. Oh, sure, it works. Some mobile phones have built-in power receivers that communicate with wireless charging pads or cradles. You can just plop your phone on the pad and let it recharge overnight, without ever fumbling with a USB cable. Two problems. First, even after 10 years, the industry still can't agree on a universal standard that enables any phone to work with any wireless charger. In fact, more variations keep coming. And some are radically different, so they aren't easily merged. The second problem is that wireless charging wastes energy and probably always will. Air is simply a less efficient conductor than copper. (It's a blessing. Otherwise, life would be electrifying.) One of the latest wireless-charging systems is a big departure from the conventional inductive systems now vying for adoption. An Israeli startup, Wi-Charge, is using mirror-guided infrared lasers to transmit power at distances up to 30 feet. 
The transmitter focuses the laser on a receiver that has a concentrated photovoltaic cell, which converts the beam's photons into electrons. Basically, it's like shining a flashlight on a solar panel. Light goes in, electricity comes out. Of course, even the wildest Greenpeace hippie wouldn't propose generating electricity this way. The flashlight batteries would consume far more energy than the solar panel would generate. But practicality is no obstacle when consumer convenience is the goal. Wi-Charge says its laser system is 35 percent efficient, which means it's 65 percent inefficient. Two thirds of the energy input disappears into the ether. And unlike the sun's energy, it isn't free. To me, that inefficiency is a high price to pay for replacing a cable. A million users here, a million users there, and pretty soon we're talking about real megawatts. Let's be fair to Wi-Charge. Its laser system is a clever invention for some applications. For example, fire alarms, surveillance cameras, and wireless speakers are often located in hard-to-reach places where AC power isn't readily available. And the company appears to be taking the necessary safety precautions by using low-power Class 1 infrared lasers that won't accidentally burn holes in objects or people who pass through the beam. (The transmitter and receiver require a clear line of sight.) I'm less enthusiastic about charging mobile phones this way, though. Are we really too lazy to plug in a cable? Even with a wireless charger, the phone is immobilized while it's charging, so tethering isn't a severe hardship. The only arguable advantage is there's no cable to misplace. That's progress, I suppose. But it probably wouldn't inspire a 1930s science-fiction writer. Tom Halfhill was formerly a senior editor for Byte magazine and is now an analyst for Microprocessor Report.
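To put the column's "real megawatts" worry into rough numbers, here is a minimal back-of-the-envelope sketch. The 35 percent link efficiency comes from the column; the ~10 Wh phone battery and once-daily charge are assumptions for illustration, not figures from Wi-Charge:

```python
# Back-of-the-envelope estimate of energy wasted by a 35%-efficient
# wireless charging link. Battery size and charging habits are assumed.

BATTERY_WH = 10.0        # assumed energy delivered per daily charge (watt-hours)
LINK_EFFICIENCY = 0.35   # efficiency figure quoted in the column
USERS = 1_000_000        # "a million users here, a million users there..."

# Energy drawn from the wall = energy delivered / link efficiency.
drawn_wh = BATTERY_WH / LINK_EFFICIENCY
wasted_wh = drawn_wh - BATTERY_WH

# Aggregate daily waste across a million users, converted Wh -> MWh.
daily_waste_mwh = wasted_wh * USERS / 1_000_000

print(f"Per charge: {drawn_wh:.1f} Wh drawn, {wasted_wh:.1f} Wh wasted")
print(f"Across {USERS:,} users: {daily_waste_mwh:.1f} MWh wasted per day")
```

Under these assumptions, each charge draws roughly 28.6 Wh to deliver 10 Wh, and a million daily users would throw away on the order of 18-19 MWh per day, which is the scale the column is gesturing at.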
You are subscribed to email updates from Maximum PC latest stories.