General Gaming Article
- Google Group Aims to ID London Rioters with Facial Recognition Tools
- Two Gaming Technologies Explained: A White Paper Round-Up
- Hitachi Tags Enterprise-Class MLC SSD with 25nm NAND from Intel
- Destroy the Universe While Saving Lives With LHC@home
- Windows 7 Soon To Become The Most Common OS (Finally)
- SATA + PCI Express = SATA Express
- Shiver Me Timbers! Over 200,000 Pirates Sued Since 2010
- Nvidia GeForce 280.26 WHQL Drivers Now Available for Download
- SandForce to Showcase Prototype SSD Using 24nm Toshiba MLC NAND Flash Memory
- Seagate Celebrates 1 Million Solid State Hybrid Drive Shipments
Google Group Aims to ID London Rioters with Facial Recognition Tools Posted: 09 Aug 2011 02:46 PM PDT As riots and looting continue to flare up in London, a group of online sleuths has gotten together on Google Groups to track down perpetrators. The group makes it clear that it intends to use facial recognition to identify the rioters seen in online images. A noble effort on the surface, but it comes with its own set of ethical and practical concerns. In a lot of ways, this has a distinctly vigilante vibe to it. If this group does get access to a powerful facial recognition tool, there is the possibility of misuse. The group has been discussing the issues involved, but most members appear to be going ahead with the plan. One user has offered to build the necessary software using the Face.API tool along with public images on Flickr and Facebook. Even if this method were 100% accurate, which it likely would not be, there is still the chance of wrongly accusing people. A person might have been caught on film simply trying to get away from the violence, or perhaps just standing and gawking. Surely the Internet would pass the information straight to police. A list of possible rioters would never end up posted for all to see. And even if it were, the Internet has a reputation for being totally reasonable... right?
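For a sense of what such a tool would actually do under the hood, here's a minimal sketch of the matching step, assuming face images have already been reduced to numeric embedding vectors. The names, dimensions, and threshold below are all invented for illustration; real systems like the one the group is discussing work on much higher-dimensional embeddings.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(query, gallery, threshold=0.8):
    """Return (name, score) of the best gallery match above threshold, else None."""
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = cosine_similarity(query, emb)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else None

# Toy 4-dimensional "embeddings" -- real systems use 128 or more dimensions.
gallery = {
    "suspect_a": [0.9, 0.1, 0.3, 0.5],
    "suspect_b": [0.1, 0.8, 0.7, 0.2],
}
print(match_face([0.88, 0.12, 0.31, 0.52], gallery))
```

Note how everything hinges on the threshold: set it too low and innocent bystanders become "matches," which is exactly the wrongful-accusation risk described above.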
Two Gaming Technologies Explained: A White Paper Round-Up Posted: 09 Aug 2011 01:00 PM PDT Gaming is, as it always has been, in a state of transformation. Major developers are focusing on creating 3D-ready platforms, while others, like Nintendo and Microsoft, are trying to take us beyond controllers, developing games that require physical movement and in-game interaction. The brave new world of gaming will be an interesting one indeed, so we decided to take a look at two of the pioneering technologies that may change games forever: Microsoft's Kinect and autostereoscopy. You can check out our previous white paper round-ups here and here!

Microsoft Kinect
Microsoft's unique input device for the Xbox has opened up some very intriguing possibilities. But how exactly does it work? Kinect is, perhaps, the most significant product Microsoft has developed since Windows itself. It has the potential to impact not only gaming, but general computing, communications, and media as well. It's an evolutionary platform blending sight, sound, and software that, if developed correctly into the future, could become a revolutionary UI.

Sight
Kinect's console includes an RGB camera—the same type found in webcams and cell phones across the globe. Currently, it's a device with a 640x480 resolution capable of capturing 30 frames per second. It's not 3D; depth sensing comes from the console's separate infrared hardware. An avatar, in this context, is simply a wireframe representation of the player that has been mapped with recognition points. These points correspond to the joints available on the wireframe (wrists, neck, elbows, shoulders, hips, etc., in the case of human beings) and are what allow the system to emulate accurate player motion onscreen in real time. "Real," in this case, entails a reported 200ms lag—including screen response time—thanks to processing overhead and the usual screen refresh timing.
It's possible to reduce this using a faster CPU, but in general, 200ms is right on the border of human perception. This is basically the same motion-capture process that's been used for the last decade or so in, among other things, sports games to accurately record athletes' movements for reproduction during the game's playback. But those professional systems use keyframes to smooth the motion, while Kinect's approach bypasses the static recording of pre-existing motion, instead reproducing the kinetic motion presented by the live player (in 20 points of motion) as the action proceeds. Perhaps more mundane but nonetheless important, the combination of infrared and RGB cameras also allows Kinect to provide facial recognition that can automatically log a player on to the Microsoft network, as well as associate the player with a previously used avatar. A recent update, called Avatar Kinect, gives the console the power to recognize players' facial expressions and display them onscreen. In context, this ability can be used in several preconfigured venues (currently all thinly disguised chat-room environments) to communicate with other players both verbally and through facial expressions. Apply notions of affective computing—which posits that systems will soon be capable of reacting to human facial expressions and emotions—and you can see why this is such a big deal. The entire Kinect console sits atop a pedestal, much like those of 1960s lava lamps. Unlike (most) lava lamps, the Kinect pedestal has a built-in tilt motor that lets the entire console move. The tilt range is about 27 degrees, and it's used in conjunction with the cameras' 57-degree horizontal and 43-degree vertical fields of view to give the system a greater ability to track you as you move around.

Sound
Although you may hear a barely perceptible whir coming from the console, it's the only sound you'll hear. There are no speakers inside the Kinect.
Instead, the interior sports four microphones—three on the lower-right end and one on the lower-left side. All four face downward. The quartet forms a spatial sound array that samples incoming audio and compares the four streams, separating background noise from speech, and different voices from each other. It's effective to about 4 meters from the console.
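The array's basic trick—shifting the four streams so a voice from one direction lines up and reinforces itself when the streams are combined—is classic delay-and-sum beamforming. Here's a minimal sketch under assumed parameters (a one-dimensional mic layout and a 16kHz sample rate, both invented for illustration; Kinect's actual audio pipeline is far more sophisticated):

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 16000      # Hz; an assumed speech-capture rate

def delay_and_sum(streams, mic_x, source_x):
    """Delay-and-sum beamforming over a 1-D mic array: shift each stream so a
    voice at source_x lines up across all mics, then average. Coherent speech
    reinforces itself; diffuse background noise partially cancels out."""
    dists = np.abs(np.asarray(mic_x, dtype=float) - source_x)
    delays = (dists - dists.min()) / SPEED_OF_SOUND       # extra travel time, seconds
    shifts = np.round(delays * SAMPLE_RATE).astype(int)   # whole-sample shifts
    n = min(len(s) - k for s, k in zip(streams, shifts))  # common usable length
    aligned = [np.asarray(s, dtype=float)[k:k + n] for s, k in zip(streams, shifts)]
    return np.mean(aligned, axis=0)
```

Steering the "beam" is just a matter of changing `source_x`: the same four recordings can be re-aligned after the fact toward any candidate talker position.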
While noise-cancellation microphones have been around for years, Kinect faces the unique challenge of typically having TV or receiver speakers closer to the mics than the human voices are. The acoustic-echo-cancellation techniques used in common speakerphones tend to work well, but Kinect's scenario—loudspeakers near, voices far—is the reverse of a speakerphone's. Software created by the Speech Group at Microsoft Research in Redmond solved the problem.

Software
The Kinect console does not have a processor, which is surprising considering all that's expected of it. The console did have one when it was first announced (as Project Natal, in 2009), but Microsoft withdrew the internal CPU and decided to let the processing power of the Xbox handle matters. Kudo Tsunoda, the mastermind behind Kinect, insists that the add-on uses "less than one percent" of the Xbox 360's processing power. To help achieve that, Microsoft dropped the camera's capture rate from the 60fps announced in 2009 to 30fps at its commercial release. Still, that would put a huge burden on the efficiency of the algorithms that run the console—except that the bulk of the overhead has been mitigated because the algorithms live in the Xbox console as Kinect drivers. These drivers are what describe a human's position in Cartesian space, and they are what handle reverberation problems and suppress loudspeaker echoes in the stereo acoustic-echo-cancellation algorithm. They do all this and more based on comparisons to decision forests (collections of decision trees) in conjunction with thousands of stored samples.

Continuum
There is no technical reason why a Kinect console could not be attached to any computing device loaded with the algorithms it needs to function.
While that might be slightly difficult for the traditional BIOS/OS arrangement found in most contemporary computers, a UEFI environment would clear the way for the archetypal house of the future—run by voice commands and gestures, with only its own facial recognition algorithms needed to provide security. By the time you read this, it's likely that Microsoft will have made some form of Kinect-related announcement at the 2011 Electronic Entertainment Expo in Los Angeles. Early speculation is that Microsoft's purchase of Skype might herald advanced video conferencing—such as predefined avatars with full expressions instead of true video images, to keep the CPU overhead down. And somewhere in the far-out reaches of time and space, what might a Kinect for PC/Mac be able to do with an über CPU? It's going to be an interesting future.

Autostereoscopy
When will we get 3D without the dorky glasses? One of the first (if not the first) 3D motion pictures, Power of Love, was released in 1922. A mere 89 years later, 3D technology continues to intrigue and yet struggles to gain widespread consumer acceptance. Three-dimensional production techniques have changed, theater screen designs have changed, and TVs and home-theater video projectors have changed to incorporate 3D. In spite of all this progress, most modern 3D technology still requires viewers to don a pair of dorky glasses. A new technology saddled with the ungainly, but technically accurate, name of "autostereoscopy" promises to change all that and finally allow us to see 3D video with our naked eyes.

Classic 3D Technology
Power of Love was produced using an anaglyptic process: each scene was shot simultaneously from two different angles (about 2.5 inches apart, roughly the distance between the centers of the average person's eyes). The black-and-white film was then printed in two colors, red and green, and combined into a layered film on a single reel.
When the film was screened, everyone in the audience was given a pair of special glasses outfitted with red and green lenses. The red lens canceled out the red version of the film and allowed the green version to pass through, while the green lens did just the opposite. The combination produced the illusion of depth of field. Unfortunately, the anaglyptic process induced headaches in some viewers; it also proved to be incompatible with color movies. Some 30 years later, with the movie studios desperate to find a means of luring people away from their television sets, the film House of Wax hit theaters in 1953 and did sensational box office. House of Wax was filmed using Edwin Land's Polaroid 3D system (it also featured the very first stereophonic soundtrack). The Polaroid 3D system used two lenses that captured light waves traveling in perpendicular planes. Moviegoers wore polarized glasses that functioned similarly to anaglyptic lenses, each lens passing only one of the two projected images. The 3D movie craze sparked by House of Wax petered out just a few years later, and Hollywood largely lost interest in 3D until the early 1980s. A string of schlocky "event" films—The Treasure of the Four Crowns, Jaws 3-D, and Amityville 3-D—passed through theaters, but the mania didn't last long, and not even the release of 1983's science-fiction 3D classic Metalstorm: The Destruction of Jared-Syn could resurrect the popularity of the genre. The 3D glasses caused viewers to watch a movie with their eyes slightly crossed, giving some people headaches.

Automatic Stereoscopic Imaging
Despite all the known problems with 3D glasses, most modern film studios, cinemas, and TV and video-projector manufacturers still rely on either active shutter glasses (which alternate between darkening the left and right lenses in sync with the display) or passive glasses (which filter light through polarized lenses).
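The two-color printing process described above maps naturally onto digital color channels: put the left-eye view in the red layer and the right-eye view in the green layer, and each colored lens filters out its own channel. A minimal sketch, assuming grayscale inputs (the photochemical process of 1922 is, of course, only loosely approximated here):

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine two grayscale views (H x W arrays, values 0-255) into one RGB
    frame in the spirit of the two-color anaglyptic process: the left view is
    carried by the red layer, the right view by the green layer."""
    left = np.asarray(left, dtype=np.uint8)
    right = np.asarray(right, dtype=np.uint8)
    out = np.zeros(left.shape + (3,), dtype=np.uint8)
    out[..., 0] = left    # red channel: left-eye view
    out[..., 1] = right   # green channel: right-eye view
    return out            # blue channel stays zero in this two-color scheme
```

Viewed through red/green lenses, each eye receives only one of the two layered views, and the brain fuses them into the depth illusion described above.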
Autostereoscopy (the creation of stereoscopic images automatically at the source, obviating the need for glasses) could be the ideal solution, although it's not entirely perfect either. The three most common autostereoscopic solutions available or in development today are the parallax barrier, the lenticular lens, and integral imaging. A parallax barrier screen, such as the one deployed in the Nintendo 3DS, is fabricated by facing a display—such as an LCD—with a layer of material with slits that partially obscure each pixel. The left eye is able to see only the pixels intended for the left eye, and the right eye is able to see only the pixels intended for the right eye. When the brain combines both fields of vision, it perceives depth. A parallax barrier screen depends on the viewer sitting in an ideal position—a sweet spot—to deliver maximum effectiveness. Another problem is that the 3D illusion will collapse if the viewer moves his or her head too much. And finally, the parallax barrier blocks much of the light emanating from the display, significantly reducing its brightness. These restrictions aren't major issues for a single-user, handheld gaming device like the Nintendo 3DS. TVs, on the other hand, are designed for multiple users sitting far from the display in brightly lit rooms. It's not unusual for none of the viewers to be in the sweet spot. Even the most sedentary couch potato will have difficulty sitting relatively still while watching TV. And TVs need to be as bright as possible to overcome ambient lighting conditions. Another autostereoscopic technology is the lenticular lens display. This type of display effectively puts the 3D glasses on the TV itself, with a series of very small lenses that refract light to the left and right so each eye sees only the pixels intended for it. As with the other technologies we've discussed, the brain combines the two fields of view and perceives depth.
Since lenticular lens technology doesn't place an opaque physical barrier on the display, it doesn't reduce image brightness. It can also be viewed from a wider angle without losing the 3D effect, and it's more tolerant of viewer movement. Unfortunately, lenticular lens displays remain difficult and very expensive to manufacture. Integral imaging is similar to the lenticular lens concept in that it places an array of micro-lenses—one lens for each pixel—in front of the display panel, so that each lens produces a different perspective on the image depending on the viewing angle. With this technique, the eye can see not only right and left views of an object, but top and bottom views as well. The downsides to integral imaging are that it reduces contrast, and no one has come up with a cost-effective means of manufacturing the lens array (a feat nature has already accomplished and bestowed on the eyes of house flies and honeybees).

The Current State of Retail 3D
If you can perceive 3D—not everyone can—and you're willing to accept its shortcomings, you can jump into the market now, confident in the knowledge that a major autostereoscopy breakthrough is unlikely to be right around the corner. That doesn't mean companies will cease their research and development efforts, but we wouldn't be surprised if another decade passes before "glasses-free" 3D becomes a retail reality. And then we'll all start waiting for the first demos of holographic TV.
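The pixel-splitting that a parallax barrier performs can be sketched in a few lines: the display buffer interleaves columns from the left-eye and right-eye views, and the barrier's slits ensure each eye sees only its own columns. This is a simplified model—real screens interleave at sub-pixel granularity—but it captures the idea:

```python
import numpy as np

def interleave_columns(left, right):
    """Build a toy display buffer for a parallax-barrier screen: even pixel
    columns carry the left-eye view, odd columns the right-eye view. The
    physical barrier then lets each eye see only its own set of columns."""
    left = np.asarray(left)
    right = np.asarray(right)
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]    # even columns from the left-eye image
    out[:, 1::2] = right[:, 1::2]   # odd columns from the right-eye image
    return out
```

The sketch also makes the trade-off obvious: each eye gets only half the horizontal resolution, on top of the brightness the barrier itself blocks.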
Hitachi Tags Enterprise-Class MLC SSD with 25nm NAND from Intel Posted: 09 Aug 2011 12:05 PM PDT Hitachi and Intel are fast becoming best buddies in the storage space, and why not? The two apparently play very well together. The latest effort from these tech heavyweights is Hitachi's new Ultrastar SSD400M multi-level cell (MLC) solid state drive family. Pitched as a cost-effective alternative to pricey single-level cell (SLC) SSDs, these new drives are built using Intel's 25nm enterprise-grade MLC NAND flash memory, Hitachi says. The SSD400M series is available in 200GB and 400GB capacities. The drives ship in the 2.5-inch form factor and utilize a 6Gb/s SAS interface. Benefits to the enterprise crowd include lower costs, outstanding write endurance (Hitachi claims 7.3 petabytes of lifetime random writes, or 10 full drive writes per day for five years), and fast performance, to the tune of up to 495MB/s read and 385MB/s write speeds, and up to 54,000 read and 24,000 write IOPS. These drives also boast enterprise-specific features, "including comprehensive end-to-end data protection, error correction, and error handling, resulting in the high level of reliability that is critical in enterprise systems," Hitachi says. The drives are shipping now and are currently being qualified for use with select OEMs. Image Credit: Intel
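Hitachi's two endurance figures are consistent with each other, as a quick back-of-the-envelope check shows (assuming the 400GB model and decimal units):

```python
# Sanity check: does "10 full drive writes per day for five years" really
# come out to 7.3 petabytes on the 400GB model? (Decimal units assumed.)
capacity_bytes = 400e9      # 400 GB drive
writes_per_day = 10         # full drive writes per day
days = 5 * 365              # five years
lifetime_petabytes = capacity_bytes * writes_per_day * days / 1e15
print(lifetime_petabytes)   # 7.3 -- matches Hitachi's claim exactly
```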
Destroy the Universe While Saving Lives With LHC@home Posted: 09 Aug 2011 11:59 AM PDT If the scientists at CERN ever actually succeed at recreating the Big Bang and discovering that elusive and oh-so-tantalizing Higgs boson, some folks reckon bad things might go down. Goodbye-world-style bad things. That's probably not true, but if it were to occur, wouldn't you want to be able to stare down into the swirling vortex of doom and say, "Hey, I helped make that!"? Well, now's your chance – CERN's giving you the opportunity to donate your precious computer cycles to a virtual Large Hadron Collider with the newly launched LHC@home 2.0. What exactly are those geniuses doing with your processing power? First, the sexy part: LHC@home "simulates collisions between two beams of protons traveling at almost the speed of light in the Large Hadron Collider (LHC). Scientists working at CERN compare these simulations, based on their own theoretical models, with real data from the four LHC experiments," according to the program's press release. But even when CERN doesn't need your CPU for beginning-of-days simulations, your computer's being used for a good cause: "Through this virtual supercomputer, the Citizen Cyberscience Centre is providing a low cost technology for researchers in developing countries to meet challenges like providing clean water and even tackling vital humanitarian work including crisis mapping and damage assessment." Saving lives while possibly ending the world? What are you waiting for? Go check it out!
Windows 7 Soon To Become The Most Common OS (Finally) Posted: 09 Aug 2011 11:31 AM PDT Even though Windows 7 rocks the socks off the decade-old XP and the lackluster ball of consumer disappointment known as Vista, Microsoft has had a hard time convincing PC users to make the switch to its new (well, two-year-old) operating system. When 2011 first rolled around, fewer than one in ten North American PCs rocked Redmond's latest offering. Expect that number to look a whole lot different by New Year's; one leading analyst firm says Windows 7 will be the most common OS in the world by the time 2012 rears its ugly head. Chalk the gargantuan increase up to enterprise adoption, Gartner says. After two-ish years of preparing to roll out Windows 7, businesses are finally getting around to actually doing it. As a result, Gartner predicts that 94 percent of all PCs shipped this year will be equipped with Microsoft's baby, which will boost Windows 7's overall penetration to 42 percent of the market – making it king of the OS hill. The biggest boosts should come from North American and Asian businesses. Not a Windows 7 fan? Gartner says Macs have started selling briskly, or at least as briskly as Macs have sold in recent memory; expect to find Apple's OS on 4.5 percent of all computers shipped in 2011. Gartner expects Linux to grab only 2 percent of the global market in the next five years, and that number drops to one percent on consumer rigs.
SATA + PCI Express = SATA Express Posted: 09 Aug 2011 10:32 AM PDT If you're talking music, mashups are so, like, 2005. To be honest, we never really got into mixing Disturbed with the Backstreet Boys to begin with. But when you start talking data-transfer-specification mashups, our ears start to perk up. Our sonic receptors are standing at full attention today, after the Serial ATA International Organization (SATA-IO) announced the development of a new specification that combines the SATA infrastructure with the PCIe interface to form a Voltron-like super-spec. The SATA Express specification (creative name, huh?) will offer 8Gbps and 16Gbps speeds and should be available by the end of the year. SATA-IO says the spec being developed targets SSDs and hybrid drives that are chafing at the edges of the 6Gbps SATA3 spec. Drives that don't need that kind of transfer speed – like flash-memory-less HDDs – will continue to use the still-speedy SATA3 spec. "The specification will define new device and motherboard connectors that will support both new SATA Express and current SATA devices," the group's press release (PDF) says.
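To see why SSDs are chafing at SATA3 and what the new speeds would buy, it helps to account for line-coding overhead. SATA uses 8b/10b coding (8 payload bits per 10 transmitted), which is why 6Gbps nets roughly 600MB/s; the efficiency of the SATA Express links will depend on which PCIe generation's coding they inherit. The figures below are our own rough arithmetic, not numbers from the spec:

```python
def usable_mb_per_s(line_rate_gbps, coding_efficiency):
    """Approximate payload bandwidth after line coding:
    raw rate * coding efficiency, converted from bits to bytes."""
    return line_rate_gbps * 1e9 * coding_efficiency / 8 / 1e6

# SATA 6Gb/s uses 8b/10b coding (80% efficient): the familiar ~600MB/s ceiling.
print(usable_mb_per_s(6, 0.8))           # ~600 MB/s
# Hypothetical SATA Express links, assuming 8b/10b at 8Gbps (PCIe 2.0-style)
# and 128b/130b at 16Gbps (PCIe 3.0-style) coding.
print(usable_mb_per_s(8, 0.8))           # ~800 MB/s
print(usable_mb_per_s(16, 128 / 130))    # ~1969 MB/s
```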
Shiver Me Timbers! Over 200,000 Pirates Sued Since 2010 Posted: 09 Aug 2011 09:57 AM PDT Netflix and its all-consuming thirst for bandwidth may get a lot of the headlines these days, but don't make the mistake of thinking illegal P2P file sharing is dead. Hop onto one of the big-name torrent sites and you'll find a veritable ocean of available titles being seeded by a whole heck of a lot of people. But just because the media's forgotten about file sharers doesn't mean the lawyers have; in fact, over 200,000 pirates have found themselves slapped with a lawsuit since the beginning of 2010. The vast majority of those papers have been served thanks to the hot new trend in anti-P2P tactics: mass lawsuits. The not-at-all-biased *cough* yet incredibly informative TorrentFreak reports that since the beginning of 2010, mass lawsuits against file sharers have been filed in several states, predominantly against BitTorrent users. That 200k number is buoyed by the lawsuit brought against 24,583 BitTorrent users by the makers of the movie The Hurt Locker. The lawsuits are filed in order to get the information of the person hiding behind the infringing IP address. Once the copyright holders get names and addresses, they inform the BitTorrent user that they're going to sue their ass – unless they agree to a settlement in the form of a sizable cash payment, ranging from a few hundred bucks all the way up to a few thousand. TorrentFreak reports that 145,417 of those defendants haven't resolved their cases yet. Of the 50k-plus cases that have been closed, not a single one has made it to the courtroom, even though the mass lawsuits are based on the threat of a jury trial. That works out for the copyright holders, who not only don't have to spend thousands in legal fees to go after those pesky file sharers, but actually make money hand over fist as the pirates throw cash settlements at them left and right.
Of course, as TorrentFreak points out, that "means that the evidence they claim to hold has not been properly tested."
Nvidia GeForce 280.26 WHQL Drivers Now Available for Download Posted: 09 Aug 2011 09:29 AM PDT Power users who like to live on the bleeding edge have been able to download Nvidia's GeForce 280.26 drivers in beta form for some time now. As for everyone else who owns an Nvidia graphics card? Your day has come. Nvidia's latest drivers, which put a heavy emphasis on 3D Vision support, are now WHQL certified and ready for mass consumption. The GeForce 280.26 drivers add support for a handful of new 3D Vision projectors, including the Acer X1111, BenQ W710ST, and NEC NP-V300W, as well as ViewSonic's V3D245 3D Vision monitor. Nvidia also shoehorned in over two dozen new 3D Vision game profiles, and updated three others (Crysis 2, Deep Black, and Super Street Fighter IV: Arcade Edition). In addition to plenty of love for 3D Vision, Nvidia addressed a bunch of issues for both single-GPU and multi-GPU users. You can read the full list of changes here, and grab the latest drivers for your Nvidia graphics card here.
SandForce to Showcase Prototype SSD Using 24nm Toshiba MLC NAND Flash Memory Posted: 09 Aug 2011 09:13 AM PDT SandForce has built quite a name for itself with high-end solid state drive controllers employed in a number of enthusiast-level SSDs, and the company shows no signs of slowing down. After launching its second-generation SF-2200 (SATA 6Gbps) and SF-2100 (SATA 3Gbps) chipsets earlier this year, SandForce says it's now prepared to demonstrate a prototype SSD built with Toshiba's 24nm multi-level cell (MLC) NAND flash memory. The demonstration is set to take place today through August 11 at the Santa Clara Convention Center. SandForce's prototype SSD pairs the company's SATA 6Gbps SF-2000 series processors with Toshiba's 24nm Toggle Flash memory operating at 166 megatransfers per second (MT/s), resulting in balanced read and write speeds of up to 500MB/s and up to 60,000 IOPS. "As the principal inventor of NAND flash memory, Toshiba is constantly evolving this technology to be the highest quality and most cost effective media for SSDs by working closely with innovative companies like SandForce," said Shigeo Ohshima (PDF), Technology Executive, Memory Design and Application Engineering, Toshiba. "The SandForce SF-2000 SSD processor, combined with our new 24nm NAND flash memory provides an optimal SSD solution to enable accelerated deployment of thin-and-light notebooks as well as mainstream enterprise applications." According to Toshiba, this pairing is supposed to offer 1.9-times-faster read and 1.5-times-faster write speeds when compared with current 32nm SSDs, SoftPedia reports. That would surely benefit SandForce's already strong share of the SSD market; SandForce says it has shipped "well over 2 million" chipsets in the past 18 months.
Seagate Celebrates 1 Million Solid State Hybrid Drive Shipments Posted: 09 Aug 2011 08:46 AM PDT Do you go for oodles of affordable storage in your next PC build with a mechanical hard drive, or raid your son's piggy bank and splurge on an ultra-fast solid state drive? You could go with both -- SSD for the OS, HDD for storage chores -- but that's the most expensive option of all. There's something of a happy medium available in Seagate's Momentus XT solid state hybrid drive, of which Seagate says it has shipped 1 million units since last year. Market research firm IDC says that's just the beginning. "Seagate's shipment of its one millionth Momentus XT drive is just the beginning of a bright future for solid state hybrid drives," said John Rydning, research director at IDC. "Fast, capacious, and economical hybrid HDD and NAND flash storage solutions like the Momentus XT drive will be found in roughly 25 percent of all new PCs shipped in 2015." Seagate's Momentus XT solid state hybrid drives try to combine the best of both worlds by pairing a 7200RPM mechanical hard drive of up to 500GB in capacity with 4GB of onboard solid state memory. Frequently accessed data is shuttled over to the fast storage area, in theory giving users the benefits of a solid state drive with the capacity of a traditional hard drive. Does it work? Find out by reading our review here.
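Seagate doesn't publish the Momentus XT's actual caching algorithm, but the "frequently accessed data gets shuttled to flash" idea can be sketched as a toy promote-on-repeated-reads cache. The slot count and promotion threshold below are invented for illustration:

```python
from collections import Counter

class HybridDriveSketch:
    """Toy model of a solid state hybrid drive: blocks that are read often
    enough get promoted into a small flash cache; the coldest cached block
    is evicted when the cache is full."""
    def __init__(self, cache_slots=2, promote_after=3):
        self.reads = Counter()       # per-block read counts
        self.cache = set()           # blocks currently held in "flash"
        self.cache_slots = cache_slots
        self.promote_after = promote_after

    def read(self, block):
        self.reads[block] += 1
        if block in self.cache:
            return "flash"           # fast path: served from solid state
        if self.reads[block] >= self.promote_after:
            if len(self.cache) >= self.cache_slots:
                # Evict the cached block with the fewest total reads.
                coldest = min(self.cache, key=lambda b: self.reads[b])
                self.cache.remove(coldest)
            self.cache.add(block)
        return "disk"                # slow path; promotion pays off next time
```

The point the sketch illustrates is the one in the paragraph above: the first few reads of a hot block (say, OS boot files) come off the platters, but repeat reads are served at solid-state speed.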
You are subscribed to email updates from Maximum PC - All Articles.