General Gaming Article


HP Summons Envy Spectre to Premium Ultrabook Space

Posted: 09 Jan 2012 07:46 PM PST

The form factor may be new, but throw everything you know about Intel's Ultrabook concept out the window. Well, almost everything. Hewlett-Packard just unveiled its Envy 14 Spectre, a premium consumer Ultrabook coated with Gorilla Glass on the lid, display, palmrest, and HP ImagePad, and infused with a white glove treatment that includes a concierge service. Seriously.

No, someone from HP isn't going to come into your home and help rearrange your living room around the Spectre, but the service does give you direct access to dedicated support agents, as well as next business day shipping of replacement parts if there's an issue that can't be resolved remotely.

This is a premium Ultrabook, after all, and that becomes clear at first glance.

"Sleek, midnight black glass on the outside and stark contrast silver glass on the inside make Spectre extraordinary, defying conventional notebook design," said Eric Keshin, senior vice president, Strategy and Marketing, Personal Systems Group, HP. "We chose the Spectre name to evoke mystery, and we packed it with the best in entertainment technology to satisfy those who expect the unexpected."

The Envy 14 Spectre brings Beats Audio to the Ultrabook party (with an external jog dial) and rocks out with the latest Intel Core i5 and i7 processors. It has up to 256GB of solid state storage, Intel Rapid Start Technology, support for two ultra-fast mSATA SSDs, 4GB or 8GB of DDR3 memory, 802.11n Wi-Fi, USB 3.0, GbE LAN, HDMI and DisplayPort connectivity, and up to 9 hours of battery life. Also included is two years of Norton Internet Security 2012, the same suite that scored a 9/Kick-Ass in our most recent antivirus roundup.

In many ways, the Ultrabook category is still being defined as system builders look to put their stamp on the form factor. At a starting price of $1,400, HP just showed it isn't afraid to attack the premium market with a model that costs more than even Apple's MacBook Air, banking on buyers being enticed by better hardware and a "stunningly sleek" frame.

The Envy 14 Spectre will be available on February 8, 2012.

Samsung Announces Series 5 Ultrabooks and New Series 9 Laptops

Posted: 09 Jan 2012 03:03 PM PST


Add Samsung to the multitude of vendors announcing new Ultrabook models at CES this year. The company is entering the category with the Series 5 Ultra family, consisting of both a 13.3- and 14-inch model. The design of these thin, stylish portables is clearly influenced by the Series 9 laptop, which itself has undergone an update.

The 13.3-inch Series 5 Ultra weighs 3.24 pounds. It comes with a 1.6GHz Core i5 2467M, 4GB of RAM, and either a 500GB HDD or 128GB SSD. Its matte screen has a 1366x768 resolution. The laptop offers one USB 3.0 port, two USB 2.0 ports, full-size HDMI, an Ethernet port, and a 4-in-1 media reader. It's $900 with the HDD; $1,100 with the SSD.


The 14-inch Series 5 Ultra sports a 1.6GHz Core i5 2467M, 4GB of RAM, and a 500GB HDD. Notably, it's the first so-called Ultrabook to feature an optical drive. Nevertheless, it rings in at less than four pounds—3.94 pounds, to be exact. Like the 13.3-inch model, its matte screen is 1366x768. Its port selection, however, differs slightly, consisting of full-size HDMI, VGA, two USB 3.0 ports, one USB 2.0 port, an Ethernet port, and a 4-in-1 media reader. It's priced at $950.

Step up to a higher class of thin-and-light, and you get Samsung's Series 9 family—in both 13.3-inch and 15-inch sizes. The new Series 9 models are every bit as stylish and sophisticated as the originals that turned heads last year, but feature some subtle changes. For one thing, they're a bit thinner and lighter. Weighing 2.5 pounds and 3.5 pounds, respectively, the new models are just 0.5 inches at their thickest. They feature full-aluminum unibody construction, 1600x900 screens, and backlit keyboards. The 13.3-inch model features a 1.6GHz Core i5 2467M with 4GB of DDR3 and a 128GB SSD. The 15-inch model has the same proc and SSD, but 8GB of DDR3. The price for the 13.3-inch model is $1,400; the 15-inch model is $1,500.


Netgear Announces 2TB Media Storage Router

Posted: 09 Jan 2012 02:52 PM PST

Wireless routers are not exactly the sexiest products these days, but Netgear is trying to change that with the just-announced WNDR4700. This Media Storage Router has all sorts of goodies that go beyond the routing of network connections. The WNDR4700 comes with a 2TB hard drive and a ton of firmware features to pump up any home network.

The drive in the router is user-replaceable if you should need more storage down the road. Also included are two USB 3.0 ports that support mass storage and printers. There is support for PCs and Macs, with the latter having the option to use Time Machine for backups. Functionality can be extended with the addition of apps from the Netgear App Store, as well. 

Let's not forget that this is a router, and it's got all the bells and whistles in that department too. This is a dual-band 802.11n device capable of 450Mbps on the N standard. It runs on both 2.4 and 5GHz. Netgear has not revealed a price; that should come closer to the expected summer 2012 release.

The State of GPU Computing: Is the CPU Dead Yet?

Posted: 09 Jan 2012 02:27 PM PST

Massively parallel computing engines inside GPUs make them ideal for a wide range of tasks in addition to graphics. But where are the applications?

In the dark ages of PC gaming, the CPU took care of most of the graphics chores. The graphics chip did just the basics: some raster operations, dedicated text modes, and such seemingly quaint tasks as dithering colors down to 256 or 16 colors. As Windows took hold, the graphics equation began to shift a bit, with some Windows bitmap operations handled by "Windows accelerators." Then along came hardware like the 3dfx Voodoo and the Rendition V1000, and accelerated 3D graphics on the PC took off.

Now it's coming full circle. Today's GPUs are fully capable of running massively parallel, double-precision floating-point calculations. GPU computing allows the 3D graphics chip inside your PC to take on other chores. The GPU isn't just for graphics anymore.


The Fermi Die - GPU compute pioneer Nvidia advanced the cause with its Fermi architecture, which features 512 CUDA cores primed for computational chores.

GPU computing has its roots in an academic movement known as GPGPU, short for "general purpose computing on graphics processing units." Early GPGPU efforts were limited due to the difficulty of trying to get pre-DirectX 9 GPUs to work effectively with floating-point calculations. In the DirectX 11 era, GPU architectures have evolved, taking on some of the characteristics of traditional CPUs, like loops and branches, dynamic linking, and large addressable memory space, among others.

The new age of GPU compute is also more open. DirectCompute, built into DirectX 11, supports all the major DirectX 11-capable hardware. OpenCL supports multiple operating system platforms, including mobile. We'll look at each of the major hardware manufacturers and APIs for GPU computing, as well as some applications that utilize the technology.

State of the Hardware

Sticking with GPU hardware, there are currently just two companies shipping GPU compute-enabled silicon: AMD and Nvidia. They'll soon be joined by Intel, however, with the integrated GPU in the upcoming Ivy Bridge CPU. Let's take a look at each of them in turn.

Nvidia: Tesla and CUDA

The first attempts at GPGPU used Nvidia GPUs. There were some early experiments with machine-vision applications that actually ran on very early GeForce 256‑series cards, which didn't even have programmable shaders. However, efforts began to blossom when DirectX 9's more flexible programmable-shader architecture arrived.

Nvidia took note of these early efforts, and realized that GPUs were potentially very powerful tools, particularly for scientific and high-performance computing (HPC) tasks. So the company's architects began to think about how to make the GPU more useful to general purpose programming. Until then, GPUs were great for graphics, but trying to write applications that were more general was difficult. There were no loops or returns, for example, and shader programs severely restricted the number of lines of code permitted.

Part of the issue, of course, was the lock DirectX 9 had on GPU hardware architecture. Back in the DirectX 9 era, any implementation of features to make life easier for non-graphics applications would be outside of the DirectX standard. Given their raw floating-point and single-instruction, multiple-data (SIMD) performance, however, graphics processors looked like good candidates for certain classes of supercomputing tasks.


The first iteration of Nvidia's CUDA GPU computing platform ran on the 8800 GTX.

In order to further the GPGPU movement, Nvidia created a more compute-friendly software development framework: CUDA, the Compute Unified Device Architecture. With CUDA 1.0, programmers could use standard C, plus Nvidia extensions, to develop applications, rather than having to work through the more cumbersome shader language process. In other words, general purpose apps didn't have to be written like graphics code. CUDA worked with the 8800 GTX and related GPUs. That generation of graphics processors spawned the first products dedicated to GPU compute, the Tesla 870 line.
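
To make that shift concrete, here's a minimal, hypothetical sketch of the programming model CUDA introduced (the kernel and variable names are ours, not from any shipping application): the host side is ordinary C, and a few extensions such as __global__ and the <<<blocks, threads>>> launch syntax mark the part that runs on the GPU, one thread per element.

```
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    // Ordinary C on the host side...
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // ...plus a handful of CUDA calls to move data and launch the kernel.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // ~4,096 blocks of 256 threads
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f\n", c[0]);                    // expect 3.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(a); free(b); free(c);
    return 0;
}
```

No shader language in sight: the same addition, written for DirectX 9-era hardware, would have had to be disguised as a pixel shader drawing into a texture.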

Since the early days of the 8800, Nvidia has continued to build in architectural features to make the GPU a better general purpose programming tool. The goal isn't to make the GPU a replacement for the CPU. CPUs still excel at linear or small-scale multithreaded applications. However, GPUs are potentially excellent at large-scale parallel programming applications involving hundreds of threads operating on large volumes of separate but similar data. That programming model is ideal for a certain class of scientific and high-performance applications, including financial analysis.

It's significant that Nvidia positioned its latest Fermi architecture as a GPU compute platform before launching it as a graphics processor. The Fermi architecture brought substantial hardware enhancements to make it a better general purpose processor. These include fast atomic memory operations (which means a single memory location won't be corrupted by concurrent accesses from different threads), a unified memory architecture, better context switching, and more. Since Fermi's launch, Nvidia has also updated its CUDA software platform several times, which we'll discuss shortly.
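
As a hypothetical illustration of those atomic operations (the kernel name and 16-bin layout are our invention), here's a tiny histogram: thousands of threads increment the same handful of counters, and atomicAdd serializes the colliding updates in hardware so no counts are lost. The managed-memory allocation is a later CUDA convenience, used here only to keep the sketch short.

```
#include <cuda_runtime.h>
#include <cstdio>

// Bin a buffer of bytes into 16 buckets; atomicAdd prevents lost updates
// when many threads hit the same bucket at once.
__global__ void histogram16(const unsigned char *data, int n, unsigned int *bins) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(&bins[data[i] / 16], 1u);
}

int main() {
    const int n = 1 << 20;
    unsigned char *data;
    unsigned int *bins;
    cudaMallocManaged(&data, n);
    cudaMallocManaged(&bins, 16 * sizeof(unsigned int));
    for (int i = 0; i < n; i++) data[i] = (unsigned char)(i % 256);
    for (int i = 0; i < 16; i++) bins[i] = 0;

    histogram16<<<(n + 255) / 256, 256>>>(data, n, bins);
    cudaDeviceSynchronize();

    printf("bin 0 holds %u values\n", bins[0]);  // expect 65536 (n / 16)
    cudaFree(data); cudaFree(bins);
    return 0;
}
```

Without the atomic, two threads could read the same counter, both add one, and write back identical values, silently dropping a count.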

Nvidia didn't just see GPU compute as something for oil exploration and academic computing. Nvidia acquired PhysX developer Ageia several years ago, discarding the dedicated hardware but keeping the broadly used physics API, so the GPU can accelerate physics calculations. The company has also worked with game developers to incorporate GPU compute into games, for water simulation, optical lens effects, and other compute-intensive tasks. Finally, it has worked with a number of mainstream companies like ArcSoft, Adobe, and CyberLink to enable GPU-accelerated video transcoding in both high-end and consumer-level video applications.

All the work positioning Fermi as a compute platform has paid off, as Nvidia's Tesla compute hardware sales topped $100M last year. Fermi doesn't get the attention that the desktop graphics or mobile processor divisions have been getting, but its existence has enabled Nvidia to remain at the top of the heap for GPU compute. Still, competitors are nipping at its heels.


AMD: The Mainstreaming of GPU Compute

AMD was a little late to the GPU compute party, but it has been working feverishly to catch up. ATI Stream was the company's equivalent to Nvidia's CUDA. The first AMD FireStream cards for dedicated GPU compute were the model 580s, built on the Radeon X1900 GPU, which saw fairly limited pickup. It wasn't until the Radeon HD 4000 series shipped that AMD really had competitive hardware for GPU compute. The HD 5000 improved on that substantially. The latest Radeon 6000 series has significant enhancements specifically geared for general purpose parallel programming.

Philosophically, though, AMD has taken a slightly different road. At first, the company tried to mimic Nvidia's CUDA efforts, but eventually discarded that approach and fully embraced open standards like OpenCL and DirectCompute. (We'll discuss the software platforms in more detail next.)


AMD is taking GPU computing mainstream by building Radeon-class shader cores into the CPU die, as seen in this Fusion die shot.

Recently, AMD has shifted its GPU compute focus more to the mainstream. While AMD ships dedicated compute accelerators under the FireStream moniker, the company is trying to capitalize on its efforts to integrate Radeon graphics technology into mainstream CPUs. The Fusion APUs (accelerated processing units) are available in either mobile or desktop flavors. Even the high-end A8-3800, sporting a quad-core x86 CPU and 400 Radeon-class programmable shaders, costs less than $150.

AMD calls its approach to mainstream GPU compute App Acceleration. It's a risky approach, since the mainstream applications ecosystem isn't exactly rich with products that take advantage of GPU compute. The few applications that exist can run much faster on the GPU side of the APU, but the modest performance of the x86 side of the equation makes it difficult to compete with Intel's x86 performance dominance. AMD is betting that more software developers will take advantage of GPU compute, shifting the performance equation for the APUs.

Intel: Bridges to GPU Compute

Intel has been watching the GPU compute movement with some understandable concern. The company tried to get into discrete graphics with Larrabee, but that project died on the vine. The technology behind Larrabee is now relegated to limited use in some high-performance parallel compute applications, but you can't go out and buy a Larrabee board.

On the other hand, Intel has made waves with the integrated graphics built into its current Sandy Bridge CPUs. The Intel HD Graphics GPU is pretty average for Intel graphics, but the fixed-function video block is startlingly good. Video decode and transcode are very fast—even faster than most GPU-accelerated transcode. Of course, it's a fixed-function unit, so it isn't useful with non-standard codecs. But since a big part of the consumer GPU compute efforts from Nvidia and AMD focus on video encode and transcode, Sandy Bridge graphics stole a little thunder from the traditional graphics companies.


The GPU in Sandy Bridge is fairly mediocre—except for the fixed-function video engine, which is purely awesome.

Intel's upcoming 22nm CPU, code-named Ivy Bridge, may actually change the balance. The x86 CPU itself will offer modest enhancements over Sandy Bridge, but the GPU is being re-architected to be fully DirectX 11 compliant. When asked if GPU compute code could run entirely on the Ivy Bridge graphics core, Intel's lead architect said it would. Performance is unknown at this point, but if Intel can couple a GPU core that's equal to the AMD GPU inside Fusion APUs with its raw x86 CPU capabilities, it may signal the sunset of the era of entry-level discrete graphics cards.

The API Story

If you can't write software to take advantage of great hardware, you essentially have really expensive paperweights. Early attempts to turn GPUs into general purpose parallel processors were bootstrapping efforts, requiring programmers to figure out how to write a graphics shader program that would do something other than graphics.

As the hardware evolved, a strong need for standard programming interfaces became critical. What happened is a recapitulation of graphics history: proprietary technology first, then a steady shift to more open standards.

CUDA

Nvidia's CUDA platform was one of the first attempts to build a standard programming interface for GPU compute. Nvidia has always maintained that CUDA isn't really "Nvidia-only," but neither AMD nor Intel has really taken up the company's offer to accept it as a standard. Some of Nvidia's third-party partners, however, have chipped in, enabling support for Intel CPUs as fallback for some CUDA-based middleware.

CUDA started out small, consisting of libraries and a C compiler to write parallel‑processing code for the GPU. Over the years, CUDA has evolved into an ecosystem of Nvidia and third-party compilers, debugging tools, and full integration with Microsoft Visual Studio.

CUDA has seen most of its success in the HPC and academic supercomputing market, but it has a broader reach than just deskside supercomputers. Adobe used CUDA in Premiere Pro CS4 and later versions to accelerate high-definition video transcode and some transitions. MotionDSP uses CUDA to help reduce the shaky-cam effect in home videos. We'll highlight a few GPU-accelerated apps later in this article.

ATI Stream

We'll just mention AMD's Stream software platform briefly, since AMD is no longer pushing it, choosing to focus instead on OpenCL and DirectCompute.

Stream was AMD's attempt to compete with CUDA, but the company obviously feels that the greater accessibility offered by standards-based platforms is more appealing.

DirectCompute

DirectCompute shipped with Microsoft's DirectX 11 API framework, so it's available only on Windows Vista and Windows 7; it will also be available on Windows 8 when that OS ships. That means there's no support for DirectCompute on non-Microsoft operating systems. DirectCompute won't run on Windows XP, either, nor on Windows Phone 7 or the Xbox 360.

DirectCompute works across all GPUs capable of supporting DirectX 11. Today, that means only Nvidia GTX 400 series or later and AMD Radeon HD 5000 series or later. Intel will support DirectX 11 compute shaders when Ivy Bridge ships in 2012.

DirectCompute's key advantage is that it uses an enhanced version of the same shader language, HLSL, for GPU compute programming as it does for graphics programming. This makes it substantially easier for the large numbers of programmers already facile in Direct3D to write GPU compute code. It also runs across graphics processors from both AMD and Nvidia, giving it broad graphics hardware support.

On the downside, DirectCompute has no CPU fallback. So code specifically written for DirectCompute simply fails if a DirectX 11-capable GPU isn't available. That means programmers need a separate code path if they want to replicate the results of the DirectCompute code on a system running an older GPU.
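
Here's what that dual code path looks like in practice. DirectCompute itself is the API without the fallback; for brevity, this hypothetical sketch (the scale/scaleGPU/scaleCPU names are ours) shows the same pattern using CUDA's runtime API, so all the examples in this article stay in one language: probe for a capable device, and keep a plain CPU loop as a second, separately maintained implementation.

```
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scaleGPU(float *x, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= k;
}

// The second, separately maintained code path for machines with no capable GPU.
void scaleCPU(float *x, int n, float k) {
    for (int i = 0; i < n; i++) x[i] *= k;
}

void scale(float *x, int n, float k) {
    int devices = 0;
    if (cudaGetDeviceCount(&devices) == cudaSuccess && devices > 0) {
        float *d;
        cudaMalloc(&d, n * sizeof(float));
        cudaMemcpy(d, x, n * sizeof(float), cudaMemcpyHostToDevice);
        scaleGPU<<<(n + 255) / 256, 256>>>(d, n, k);
        cudaMemcpy(x, d, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d);
    } else {
        scaleCPU(x, n, k);  // fall back rather than simply failing
    }
}

int main() {
    float x[4] = {1, 2, 3, 4};
    scale(x, 4, 2.0f);
    printf("%.0f %.0f %.0f %.0f\n", x[0], x[1], x[2], x[3]);  // 2 4 6 8
    return 0;
}
```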

OpenCL

OpenCL was originally developed by Apple, which then turned the framework over to an open standards body, the Khronos Group. Apple retained the name as a trademark, but granted free rights to use it.

OpenCL runs on just about any hardware platform available, including traditional PC CPUs and GPUs inside mobile devices like smartphones and tablets. Care must be taken with code designed for multiplatform use, as a cell‑phone GPU may not be able to handle the same number of threads as gracefully as an Nvidia GTX 580. In fact, Intel has even released an OpenCL interface for the current Sandy Bridge‑integrated GPU.
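
One practical consequence: portable compute code should ask the device what it can handle before sizing a launch. OpenCL exposes these limits through clGetDeviceInfo (CL_DEVICE_MAX_WORK_GROUP_SIZE, among others); the hypothetical sketch below shows the equivalent query via CUDA's runtime API, to keep this article's examples in a single language.

```
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0 before sizing any launch
    printf("%s: up to %d threads per block, %d multiprocessors\n",
           prop.name, prop.maxThreadsPerBlock, prop.multiProcessorCount);
    return 0;
}
```

A desktop GTX 580 and a phone-class GPU will report very different numbers here, and a well-behaved multiplatform app adjusts its launch dimensions accordingly.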

Support for OpenCL has been quite strong. AMD is so enamored of OpenCL that it dropped its ATI Stream SDK in favor of a new Accelerated Parallel Processing SDK, which exclusively supports OpenCL. OpenCL has also come to the web. A variant of OpenCL, called WebCL, is in the prototype stage for web browsers, which allows JavaScript to call OpenCL code. This means you may one day run GPU compute code inside your browser.

On the other hand, OpenCL is still in its infancy. Supporting tools and middleware are still emerging, and for the time being developers may need to create their own custom libraries, instead of relying on commercially available or free middleware to ease programming chores. There's no integration yet with popular dev tools like Microsoft's Visual Studio.


The API Wars

The GPU compute API situation today resembles the consumer 3D graphics API wars of the late 1990s. The leading development platform is CUDA. Despite Nvidia's protestations to the contrary, CUDA remains a proprietary platform. It has a rich ecosystem of developers and applications at this stage, but history hasn't been kind to single-platform standards over the long haul.


This chart sums up the state of the GPU compute APIs in a nutshell.

You could argue that DirectCompute is also proprietary, since it's Windows-only—and even lacks support on pre-Vista versions of Windows. However, Windows is by far the leading PC operating system, and DirectCompute supports all existing DirectX 11–capable hardware. That's where the support ends, however, since there's no version for mobile hardware, though we may see that change with Windows 8.

OpenCL offers the most promise in the long run, with its support for multiple operating systems, a wide array of hardware platforms, and strong industry support. OpenCL is the native GPU compute API for Mac OS X, which is gaining ground in the PC space, particularly on laptops. But OpenCL is still pretty immature at this stage of the game. There's a strong need for integration with popular development platforms, more powerful debugging tools, and more robust third-party middleware.

The Applications Story

To see what kind of strides GPU compute has made, we're going to focus on consumer applications, not scientific or highly vertical applications. GPUs should do well in applications where the code and data are highly parallel. Examples include some photography apps, video transcoding, and certain tasks in games (that aren't just graphical in nature).

Musemage

Musemage is a complete photo editing application available from Chinese developer Paraken. When running on systems with Nvidia GPUs, Musemage is fully GPU accelerated. Musemage uses the CUDA software layer to accelerate the full range of photographic operations.


Musemage is the first photo editing application to be fully GPU accelerated.

Musemage lacks a lot of the automated functions built into more mature tools like Photoshop, but if you're willing to manually tweak your images, most of the filters and tools act almost instantly, even on very large raw files—provided you've got Nvidia hardware.

Adobe Premiere Pro CS5/5.5

Adobe's Premiere Pro is a professional-level video editing tool. One of the tasks necessary for any video editor is previewing projects as you assemble clips, titles, transitions and filters into a coherent whole. Adobe's Mercury playback engine uses CUDA to accelerate the preview. This is incredibly useful as projects grow in size—you're able to scrub back and forth on the timeline in real time, even after making changes.

In addition, a number of effects and filters are GPU accelerated, including color correction, various blurs, and more. A complete list can be found at the Adobe website.

Adobe is investigating porting the Mercury engine and other GPU-accelerated portions of Premiere Pro to OpenCL, but we haven't heard whether a final decision has been made. Given the relative immaturity of the tool sets and drivers, OpenCL may need a little more time before major software companies like Adobe commit to the new standard.

Interestingly, Intel has recently delivered a plugin for Premiere Pro CS5.5 that can speed up HD encoding if you use Adobe Encoder. It does require an H67 or Z68 chipset. With a Z68 system, you can use an Nvidia-based GPU to accelerate the Mercury playback engine and QuickSync to perform the final render.

Video Conversion

A number of video transcoding apps exist that are GPU accelerated. One of the first was CyberLink's Media Espresso, which first used Nvidia's CUDA framework, then OpenCL. The latest version of Media Espresso takes advantage of Intel's QuickSync. Transcoding with QuickSync can be faster than using a GPU, but only if you use a QuickSync-supported codec.

Higher-end tools, like MainConcept, also use GPU encode. MainConcept offers separate H.264/AVC encoders for Nvidia, running on CUDA, and AMD, which uses OpenCL.

Games

When we think of games and GPUs, it's natural to think about graphics. But games are increasingly using the GPU for elements that aren't purely graphical. Physics is the first thing that comes to mind. Usually when we think of physics, we think of collisions and rigid body dynamics.

But physics isn't just about stuff bouncing off other stuff. Film effects like motion blur, lens effects like bokeh, and volumetric smoke are handled via GPU compute techniques rather than run on the CPU. GPU compute also handles cloth simulations, better-looking water, and even some audio processing. In the future, we might see some AI calculations offloaded to the GPU; AMD has already demonstrated GPU-controlled AI in an RTS-like setting.

As more GPU compute capability is integrated into the CPU die, it's possible for the on-die GPU to handle some of these compute tasks while the discrete graphics card takes care of graphics chores. The ability for the on-die GPU and CPU to share data more quickly—without having to move data over the PCI Express bus—may make up for the fewer shader cores available on-die.
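
The sketch below (with a hypothetical stand-in kernel) illustrates the cost in question: on a discrete card, every compute round trip pays for two copies across the PCI Express bus, which is exactly the traffic a CPU and on-die GPU sharing memory can avoid.

```
#include <cuda_runtime.h>
#include <cstdio>

__global__ void simulate(float *x, int n) {  // stand-in for a physics/compute task
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = x[i] * 0.5f + 1.0f;
}

// With a discrete GPU, each round trip includes two PCI Express transfers.
void step(float *host, int n) {
    size_t bytes = n * sizeof(float);
    float *dev;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // CPU -> GPU over PCIe
    simulate<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // results back over PCIe
    cudaFree(dev);
}

int main() {
    float x[4] = {2, 4, 6, 8};
    step(x, 4);
    printf("%.0f %.0f %.0f %.0f\n", x[0], x[1], x[2], x[3]);  // expect 2 3 4 5
    return 0;
}
```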

Parallelism is the Future

CPUs will never go out of fashion. There will always be a need for linear computation, and some applications don't lend themselves to parallel computation. However, the future of the Internet and PCs is a highly visual one. Digital video, photography, and games may be the initial drivers for this, but the visual Internet, through standards like WebCL and HTML5 Canvas, will create more immersive experiences over the web. And much of the underlying programming for creating these experiences will be parallel in nature. GPUs, whether discrete or integrated on the CPU die, are naturals for this highly visual, parallel future. GPU computing is still in its infancy.

New Android App Helps You Spot SOPA-Supporting Products

Posted: 09 Jan 2012 02:26 PM PST

Up in arms about the Stop Online Piracy Act (SOPA)? Well, you're not even close to alone, and a new Android app can help the more passive opponents do their part to express their rage. The Boycott SOPA app allows users to leverage their phone's camera to make sure they aren't buying any products that come from companies supporting SOPA.

Other barcode scanning apps just tell you how much a product is selling for online, but Boycott SOPA measures something different. Just scan a barcode, and Boycott SOPA reaches out to its product database and lets you know whether or not the item comes from a company that supports the bill. If the answer is 'yes,' the app will pop up a warning that the item is "intimately related to a SOPA supporting company."

There's also a history screen listing all the products you've scanned, each with an icon reminding you how it fared. This might not stop Congress from making SOPA or Protect IP law, but you can at least put a little hurt on those who don't stand for the stability and freedom of the Internet.

Fusion Garage Finally Dead, Says Leaked Doc

Posted: 09 Jan 2012 02:07 PM PST

After an uncertain few months, it looks like JooJoo/CrunchPad maker Fusion Garage is going under. According to a leaked document sent to Business Insider, creditors are preparing to force the company into liquidation. The total owed to investors by Fusion Garage is said to be in the neighborhood of $40 million.

Fusion Garage was originally set to be the OEM that brought then-TechCrunch-editor Mike Arrington's vision of a $200 web-only tablet to life. The two had a falling out, and the renamed JooJoo ended up being a flop when Fusion Garage released it in early 2010. Then just a few months ago, Fusion Garage reemerged with a new platform based on Android along with plans for tablets and phones. The Grid 10 tablet was another monumental failure. 

At the tail end of 2011, there were rumors that Fusion Garage had closed down without communicating with its customers or partners. The company's CEO later said it was simply trying to get the business in order, but now it looks like that effort has failed. No word on what will happen to Fusion Garage's few customers and the money they've spent.

SD Association Rolls Out Wireless SD Card Standard

Posted: 09 Jan 2012 11:36 AM PST

It's increasingly becoming a wireless world, folks. Just check out the headlines from the past week or so. On top of the omnipresent smartphone/tablet chatter, we saw the launch of next-gen "5G Wi-Fi" chips capable of streaming 1080p video without a hitch, and now, today's news: even your SD card is going wireless. Seriously.

The SD Association's new Wireless LAN SD standard mixes storage with wireless IEEE 802.11 a/b/g/n, according to the organization's press release (PDF). No longer will you need to whip out a USB cable or yank your SD card out of your phone or camera to transfer files back and forth between your device and your computer; it's all over the air, baby. The same press release tosses out a few ways users will be able to use the new standard to their advantage:

  • Upload family, vacation or sports photos and video wirelessly from a camera or video camera to a computer or server for sharing or backup.
  • Wirelessly download pictures from servers with cameras and video cameras using Wireless LAN SD memory cards. Consumers can share pictures and videos without email or physical card transfers, including peer-to-peer picture and video transfers from cameras to smart phones and tablets wirelessly without an access point.
  • Use Wireless LAN SD memory cards as wireless control points for other devices, such as TVs, in a home network.

Nifty, eh? The new standard can apply to both full-sized and micro SD/SDHC/SDXC cards, but there's no word on when we'll see the first ones. It'll probably be a while, though. If you need your wireless SD fix today, Eye-Fi's been doing it for a while now.

New "Crestron Connected" Initiative Brings Plug-And-Play Cloud Control To Home Theater

Posted: 09 Jan 2012 11:16 AM PST

Who needs a universal remote? Actually, we do – juggling receiver, TV, Xbox 360 and Blu-ray player controllers is a pain in the ass – but if a company called Crestron has its way, universal remotes may just become a thing of the past. The company is working with consumer electronics manufacturers to roll out its newly announced "Crestron Connected" standard, which allows users to monitor and control their Crestron Connected devices from anywhere in the world using a web-based interface.

Crestron's targeting home theater products like Blu-ray players and HDTVs for its new control platform. Crestron Connected devices will have Ethernet ports for quick n' easy network integration – no word on Wi-Fi capabilities, however. All Crestron Connected devices will also be fully plug and play with automatic setup. Controlling and managing your devices requires the company's Fusion RV software, which is available for iOS and Android as well as traditional PCs.

"We need to evolve from a traditional hardwired, centralized control scheme to a distributed, cloud-based architecture," VP Fred Bargetzi said in the company's press release. "Crestron Connected is an important first step toward enabling different products to talk together to create smarter homes and buildings. This new technology allows for easy, fast, and affordable implementation regardless of the size and complexity of your environment."

If it ends up being widely adopted, Crestron Connected could wind up eliminating the need to have dozens of different home control apps for your various electronics and services. Whaddaya say – is Crestron Connected intriguing?

Image credit: electronichouse.com

OCZ Shows Off Everest 2 SSD Controller With Improved Write Speeds

Posted: 09 Jan 2012 10:41 AM PST

After OCZ snatched up SSD controller-maker Indilinx back in March of 2011, it took OCZ nearly nine months to work the company's speedy new Everest controller into an actual product. (The Everest-sporting OCZ Octane launched at the beginning of November.) It's going to take less time than that to roll out an Everest update; at CES, OCZ is showing off its new Everest 2 controller, which doubles up on the first-gen's random IOPS performance and should hit the streets in June.

According to Anandtech, OCZ's claiming that the Everest 2 will hit 550MB/s read, 500MB/s write and a whopping 90K 4KB random write IOPS. We're assuming that's via SATA 3.0; for comparison, the OCZ Octane's claimed SATA 3.0 numbers are 560MB/s read, 400MB/s write and 45K random IOPS. Indilinx clearly spent time focusing on write speeds, and Everest 2 achieves the higher numbers thanks to a brand-spankin' new firmware architecture. It sounds impressive, and hey, now Flag Day's not the only thing we have to look forward to in June.

AMD Radeon HD 7970 Cards Go On Sale Today

Posted: 09 Jan 2012 10:16 AM PST

It's CES time! You know what that means: a ton of new, awesome-looking tech is going to be unveiled this week, some of which will never see the light of day, and the things that do end up launching won't hit the streets for a while yet. Before we dive too deeply into the future, let's take a look at something in the here and now. Today, Radeon HD 7970 graphics cards actually started shipping. Early adopters rejoice!

In case you missed it, we've already taken a look at the AMD Radeon HD 7970 and found it pretty impressive. So far, Gigabyte, Club 3D, XFX, Asus, PowerColor and HIS all have 7970 cards up for sale on Newegg. Actually, those companies had models up for sale on Newegg; every single one is already out of stock on that site, even with their $550 to $600 price tags. A quick Google Shopping check shows that cards are still available elsewhere on the web, but usually carry even higher sticker prices.

So are you in the market for a Radeon HD 7970? Know anywhere you can pick one up? Let us know in the comments!
