General Gaming Article

HTC Vive VR Headset Takes Page from HoloLens, Gains Front Facing Camera

Posted: 05 Jan 2016 05:28 PM PST

Blending real and virtual worlds

HTC Vive Pre

Had things gone exactly as planned over at HTC, the handset-maker-turned-VR-player would have released a retail version of its Vive virtual reality headset before 2016 rolled around. That didn't happen, as HTC decided to delay the launch to better implement a "very, very big technological breakthrough." We now know what that breakthrough is.

Simply put, it's a front-facing camera. That's the big new feature HTC is promoting with its newly unveiled Vive Pre, a second generation VR headset for developers and possibly the last one before a retail unit manifests.

The front-facing camera merges physical landscapes and objects with virtual worlds. It also allows the wearer to see elements of the real world in front of them without having to remove the headset.

"A newly developed front facing camera allows you to do more both inside and outside your Virtual world by blending physical elements into the virtual space. Being able to take a seat, find your drink, and carry on conversations without removing your headset is only the beginning of what's possible," HTC explains.

It reminds us of what Microsoft is doing with HoloLens. Admittedly, it makes the Vive a more interesting product, and that's saying something since we're already intrigued by it.

The Vive Pre also sports a more compact design with updated features for comfort and usability. A revised strap design is supposed to make the headset more stable and balanced, the displays are brighter, and interchangeable foam inserts and nose gaskets ensure a snug and secure fit.

HTC said it also overhauled the controllers with updated ergonomics such as softer edges, new textured buttons, and grip pads for a more comfortable feel in the hand. There's a dual-stage trigger and haptic feedback for better interaction with objects, and integrated rechargeable batteries keep the fun going for over four hours on a single charge.

The introduction of the Vive Pre doesn't mean another delay; HTC is simply giving developers time to play with the new front-facing camera. Barring any changes, HTC still plans a retail launch in April of this year.

Follow Paul on Google+, Twitter, and Facebook

AMD’s Polaris Shoots for the Stars

Posted: 05 Jan 2016 03:14 PM PST

AMD RTG Polaris Slide 03

The North Star

In the final portion of our RTG Summit coverage, AMD has saved the best for last. Part one of the summit covered displays and visual technologies, part two was about software and AMD's push to become more open (as in open source), and now it's time to look to the North Star and find out what AMD is planning in the realm of GPU hardware.

We've all known for a while that 14/16nm FinFET process technology is coming to GPUs, and in December RTG was happy to show us their first working silicon. This is a big change from the "old AMD," where we would often get very few details about a new product prior to launch. This time, AMD is providing some high-level details of their next-generation GCN architecture (is that redundant—next-generation Graphics Core Next?), well in advance of the expected launch date.

And we may as well cut straight to the chase and let you know that Polaris isn't slated to launch until around the middle of the year, so in about six months. Which isn't too surprising, considering the cadence of GPU launches, but if you were hoping to upgrade right now, you'll have to postpone things a bit or stick with existing products.

There's something else to discuss as well, and that's the positioning of the Polaris part we were shown. Basically, it's AMD's entry-level GPU, rather than a high-end competitor; so again, you might need to wait longer if you're hoping to get something faster than a Fury X. Of course, just because AMD was demonstrating their entry-level Polaris part, that doesn't mean they can't do midrange and high-end launches in the same time frame, but we would look more toward the fall for the high-end product launch.

But what does 14nm FinFET mean, what other Polaris chips might we see, and what new technologies are being baked into the fourth generation GCN architecture (which we'll call GCN4, though admittedly it looks more like GCN 1.3)? Let's dig into the meat of the announcement and talk about some of the cool and interesting things that are coming.

14/16nm FinFET for GPUs

We already talked about the demonstration of working 14nm FinFET silicon back in December. Of course, working in a controlled demo environment isn't necessarily the same as 100 percent working for anything you might require; presumably there will be some tweaks and refinements to the drivers and hardware over the coming months as we approach the retail launch. But what is 14nm FinFET and why does it even matter?

AMD RTG Polaris Slide 07

This slide from AMD shows the past decade of graphics process nodes. We went from yearly half-node updates in 2005-2007 to full-node updates every two years from 2007-2011…and then we had to wait. There was supposed to be a half-node 20nm update in 2013, which was eventually pushed to 2014, but as you can see, that never happened.

At a high level, one of the big problems with GPUs during the past few years has been the reliance on 28nm manufacturing technology. AMD and Nvidia have used the same core process for five years—a virtual eternity in the rapidly evolving world of computers! The problem had several interrelated causes. First, as noted above, TSMC's 20nm production was delayed—the initial plan was to launch the new process node about two years after 28nm came online. Once 20nm was ready for production, however, the GPU manufacturers—AMD and Nvidia—found that they just weren't getting the scaling they expected, which ultimately led both companies to stick with 28nm until the next generation process node was ready.

The reason for the wait is that the next node would move to FinFET, and FinFET helps tremendously with one of the biggest limiting factors in GPUs (and processors in general): leakage. Traditional planar transistors stopped scaling very well beyond 28nm, so even though everything got smaller, leakage actually got worse, with the result being that a 20nm GPU may not have performed much better than a 28nm GPU. And since one of the limiting factors in GPU performance has been power requirements—no one really wants to make a GPU that requires more than 250-275W—we hit a wall.

The Fury X used a CLC to stay cool

We've seen evidence of this wall from both AMD and Nvidia during the past year. Look at the GTX Titan X (3072 CUDA cores at 1000MHz) and the GTX 980 Ti (2816 cores at 1000MHz), and you'll find that the 980 Ti is typically within one or two percent of the Titan X, even though the Titan X has nine percent more processing cores. Either the GPUs are memory bandwidth limited, or they're running into the 250W power limit—effectively yielding roughly the same performance from both GPUs, despite one having more resources. The same thing happened with the R9 Fury X and R9 Fury: the Fury X has 14 percent more shaders available (4096 vs. 3584 cores), but in terms of performance it's typically only six percent faster.
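To put numbers on that, here's the back-of-the-envelope arithmetic; the core counts are the ones quoted above, and the performance gaps are the rough figures we just cited, not new benchmark data:

```python
# Back-of-the-envelope scaling check using the core counts quoted above.
# The "observed" performance deltas are the rough figures cited in the text.

def pct_more(a, b):
    """Percentage by which a exceeds b."""
    return (a / b - 1) * 100

print(f"Titan X vs 980 Ti cores: {pct_more(3072, 2816):.0f}% more")  # ~9%
print(f"Fury X vs Fury cores:    {pct_more(4096, 3584):.0f}% more")  # ~14%
# Observed performance gaps: ~1-2% (Titan X) and ~6% (Fury X) -- far less
# than the core-count advantage, which points to power/bandwidth limits.
```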

So far, we've been talking a lot about 14nm FinFET, which means Samsung's process technology, as licensed by GlobalFoundries—and GF is the manufacturer of the working Polaris chip we were shown. TSMC is still around, however, only they call their current process 16nm FinFET. There are almost certainly differences between the Samsung and TSMC solutions, but at one point one of AMD's senior fellows said something to the effect of, "Don't get too caught up on the 14nm vs. 16nm aspect—they're just numbers and there may not be as many differences as the numbers would lead you to expect."

What we know right now is that the low-end Polaris parts will be manufactured by GF (and maybe Samsung?). But RTG was also quick to note that they will still be manufacturing parts with TSMC "when it makes sense." We don't expect to see two variants of a single GPU—14nm and 16nm—but it does sound as though AMD will have some of their next-generation GPUs produced by TSMC. That would mean a lot of extra work validating a design for two different processes, unless 14nm and 16nm are far more similar than we think. Anyway, we don't have many details on other AMD Polaris GPUs yet—or on Polaris itself, for that matter—but it will be interesting to see how AMD rolls out the new GPUs during the coming year.

AMD RTG Polaris Slide 09

More FinFETs

Getting back to the FinFET discussion, if we leave the world of GPUs behind, we've seen evidence of 20nm scaling issues elsewhere as well. Smartphones and tablets made tremendous strides when they moved from 40nm to 28nm, gaining both performance and battery life. The move from 28nm to 20nm, on the other hand, was far less impressive. The good news is that smartphones and tablets frequently don't run heavy workloads, so things like power gating and voltage islands helped…but under load, when the transistors are active, leakage is still an issue. That's perhaps one of the reasons why 20nm SoCs in smartphones have had more problems with overheating than earlier models.

AMD RTG Polaris Slide 10

That brings us to FinFET, and what it does to fix the leakage problem. The original idea dates back to 1989, when Hisamoto called it the "Delta FET" because it looked a bit like a delta wing. The FinFET name caught on in 2000, when UC Berkeley did additional work on the idea and referred to the "wing" as a "fin" instead. The fin part of the name comes from the fin-like structure that rises above the substrate, which the transistor gate wraps around, giving far more surface area and far better control over the channel.

The first production FinFET devices came in 2012, when Intel rolled out Ivy Bridge on its 22nm FinFET process. FinFETs are more difficult to manufacture than planar transistors, but the tradeoff between cost and performance eventually made them a requirement. Just as AMD and Intel (and others) moved from aluminum to copper interconnects about ten years back in an effort to reduce leakage and improve performance—with SOI, or Silicon on Insulator, as an added bonus—FinFET is the way forward for sub-20nm process technology.

AMD RTG Polaris Slide 11

RTG also presented the above slide, showing that not only does FinFET allow for improved performance, but it also reduces the variation among transistors. There's still variation, of course, but a processor can only run as fast as its weakest link—the top-left corner of each "blob" in the slide. So if you can decrease the leakage range and improve the minimum clock speed, you end up with a faster chip.

AMD RTG Polaris Slide 13

Overall, FinFET ends up being a tremendous benefit to processor designers. It allows for a reduction in power—or more performance at the same power—thanks to the performance-per-watt improvements. That will allow for GPUs that are more power friendly, enabling even thinner and lighter gaming notebooks. Or it can be used to make a mainstream GPU that doesn't require any extra power connectors (e.g., the GTX 750 Ti). Or a chip can include even more cores than before and provide better performance. RTG mentioned around a 2X improvement in perf/watt at one point, which potentially means a doubling of performance on the fastest parts; we'll have to wait and see if we get that big of a boost, but it's at least possible.

AMD RTG Polaris Slide 03

Polaris Features and Overview

All of the technology that goes into modern processors is certainly exciting, but in the end we come back to where wheels touch pavement. Having a hugely powerful engine that results in a burnout doesn't do you much good in a race, just as having the best tires in the world won't do you much good if you have a weak-sauce engine and transmission. The name of the game is balance: You have to have an architecture that's balanced between performance, power requirements, and memory bandwidth. For AMD and RTG, that architecture is Polaris.

AMD RTG Polaris Slide 04

AMD RTG Polaris Slide 05

Fundamentally, Polaris is still a continuation of the GCN architecture. We don't know if that means we'll still have 64 stream processors per Compute Unit, but it seems likely. One thing Polaris will do is unify all the Polaris GPUs under a single umbrella, giving us a unified set of features and hardware.

We're calling the new architecture GCN4, mostly to give us a clean break from the existing GCN1.0/1.1/1.2a/1.2b hardware. Most of those names are "unofficial," but they were created because AMD changed the feature set in succeeding generations of GCN hardware. Ultimately, that led to "GCN1.2" designs in Tonga and Fiji that weren't quite the same—Fiji has an updated video encoder/decoder block, for example. All Polaris GPUs, meanwhile, should inherit the same core features, detailed above.

As far as new items go, we have very little to go on right now. There's a new primitive discard accelerator, which basically entails rejecting polygons earlier so the GPU doesn't waste effort rendering what ultimately won't appear on screen. There's also a new memory compression algorithm, refining what was first introduced in Tonga/Fiji. HDMI 2.0a and DP 1.3 will both be present, along with H.265 Main10 decode of 4K content and 4Kp60 encode; both of these tie into the earlier presentation on display technologies. And then there are the scheduler, pre-fetch, and shader efficiency bullet points.

It's not clear how much things are changing, but much of the low-hanging fruit has been plucked in the world of GPUs, so these are likely refinements similar to the gains we see in each generation of CPUs. We asked RTG about the gains we can expect in performance due to the architectural changes as opposed to performance improvements that come from the move to a new FinFET process; it appears FinFET will do more to improve performance than the architectural changes.

This is both good and bad news. The good news is that it means all of the existing optimizations for drivers and the like should work well with GCN4 parts. And let's be clear, GCN has been a good architecture for AMD, generally keeping them competitive with Nvidia. The problem is that ever since Nvidia launched Maxwell in the fall of 2014, they've had a performance and efficiency advantage. The bad news is that if GCN4 isn't much more than a shrink and minor tweaks to GCN, we can expect Nvidia to do at least that much with Pascal as well, and if Pascal and Polaris continue the current trend, AMD will end up once again going after the value proposition while Nvidia hangs on to the crown. But we won't know for sure until the new products actually launch.

AMD RTG Polaris Slide 16

Showcase Showdown

We'll wrap up with a quick discussion of the hardware demo that AMD showed. Unfortunately, we weren't allowed to take pictures or actually go hands-on with the hardware, and it was a very controlled demo. In it, AMD had two systems, both using i7-4790K processors—yeah, that's how badly AMD's CPUs need an update; they couldn't even bring themselves to use an AMD CPU or APU. One system had a GTX 950 GPU and the other had the unnamed Polaris GPU. Both systems were running Star Wars: Battlefront at 1080p Medium settings. V-Sync was enabled, and the two systems were generally hitting 60FPS, matching the refresh rate of the monitor.

While performance in this test scenario was basically "equal", the point was to show an improvement in efficiency. Measuring total system power (so we're looking at probably 40-50W for the CPU, motherboard, RAM, etc.), the Nvidia-based system was drawing 140-150W during the test sequence and the AMD-based system was only using 85-90W.

What's particularly impressive here is that we've built plenty of systems with the i7-4790K since it launched. Most of those systems idle at around 45-50W. The GTX 950 is a 100W TDP part, and with V-Sync it was probably using 60-75W with the remainder going to the rest of the system. To see a system playing a game at decent settings and relatively high performance while only drawing 86W is certainly impressive, as it means the GPU is probably using at most 35W, and possibly as little as 20W.
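To make that back-of-the-envelope estimate explicit, here's a quick sketch; the wall-power figures are the ones quoted above, while the non-GPU load range is our assumption based on typical i7-4790K builds:

```python
# Rough estimate of GPU-only draw from wall power, using the figures above.
# The non-GPU load range is an assumption, not a measurement.

total_system_draw = (85, 90)   # measured at the wall, watts (Polaris system)
non_gpu_load = (55, 65)        # CPU/board/RAM under a V-Synced game, estimated

gpu_low = total_system_draw[0] - non_gpu_load[1]   # best case: 85 - 65 = 20W
gpu_high = total_system_draw[1] - non_gpu_load[0]  # worst case: 90 - 55 = 35W
print(f"Estimated Polaris GPU draw: {gpu_low}-{gpu_high}W")
```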

AMD RTG Polaris Slide 14

RTG discussed notebooks more than desktop chips at the summit, and if they can get GTX 950 levels of performance into a 25W part, laptop OEMs will definitely take notice. We haven't seen too many AMD GPUs in laptops lately—outside of the upgraded MacBook Pro Retina 15 with an R9 M370X, naturally—and we'd love to see more competition in the gaming notebook market. Come June, it looks like AMD could provide exactly that. Let's just hope we see some higher performance parts as well, as Dream Machine 2016 looms ever nearer….

Logitech Announces G502 Proteus Spectrum

Posted: 05 Jan 2016 01:28 PM PST

G502 Proteus Spectrum

Logitech announced an update to its popular G502 gaming mouse Tuesday, just a day before the Consumer Electronics Show opened its doors.

The newer G502, dubbed the Proteus Spectrum, is an update to the old Proteus Core gaming mouse. The mouse carries over most of the core features of the G502, but introduces RGB LED lighting that offers 16.8 million colors. Color combinations and settings can be coordinated with other Logitech gaming products in Logitech's Gaming Software. The lights can also be turned off completely for those who prefer to have a more subtle appearance.

The mouse still features 11 programmable buttons and five removable weights. It also carries over the PMW3366 sensor, which offers a range of 200 to 12,000 dpi. Like the Proteus Core, the mouse's button and sensor settings are stored in on-board memory. This allows the mouse to be set up once and reused on other PCs without the need to install Logitech Gaming Software on those machines.

We had a chance to play with the G502 briefly before the December holidays, and we'll be reviewing our unit soon. Logitech expects the mouse to be available in January 2016, and is currently taking pre-orders. MSRP is listed at $80.

Newegg Daily Deals: ASRock Fatal1ty Gaming Z170 Motherboard, LG BD Burner, and More!

Posted: 05 Jan 2016 11:27 AM PST

ASRock Motherboard

Top Deal:

It used to be that you could reasonably judge how powerful a PC was based on its physical footprint. Not anymore. Today it's possible to build a high end system using teeny, tiny parts, and if that's what you want to do, one place to start is today's top deal -- it's for an ASRock Fatal1ty Gaming Z170 Gaming-ITX/ac LGA 1151 Intel Z170 SATA 6Gb/s USB 3.1 USB 3.0 Mini ITX Intel Motherboard for $150 with $3 shipping (normally $209; additional $10 Mail-in rebate). Though small in stature, this mini-ITX motherboard can be the foundation of a system that delivers big performance with support for Skylake CPUs, up to 32GB of DDR4-3400+ memory, a single PCI-Express x16 graphics card, and more.

Other Deals:

Intel Core i5-4460 Haswell Quad-Core 3.2 GHz LGA 1150 BX80646I54460 Desktop Processor Intel HD Graphics 4600 for $180 with free shipping (normally $190 - use coupon code: [EMCKNPK22])

LG Black 14X BD-R 2X BD-RE 16X DVD+R 5X DVD-RAM 12X BD-ROM 4MB Cache SATA BDXL Blu-ray Burner, Bare Drive, 3D Play Back for $40 with free shipping (normally $44)

G.Skill Ares Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory for $30 with free shipping (normally $34 - use coupon code: [EMCKNPK24])

XFX Radeon R9 390X 8GB 512-Bit GDDR5 PCI Express 3.0 CrossFireX Support Video Card for $390 with free shipping (normally $420; additional $30 Mail-in rebate; $20 promotional gift card w/ purchase, limited offer)

Patriot Rains Down Hellfire Drives onto PCI-Express SSD Market

Posted: 05 Jan 2016 11:09 AM PST

Patriot's first PCIe-based SSDs

Patriot Memory

Now that storage players have squeezed about as much performance as possible out of SATA 6Gbps-based solid state drive solutions, the attention is turning to PCI-Express. Some are already there, and others, like Patriot, are getting ready to release their first PCIe-based SSDs.

Patriot is kicking off the PCIe party with its new Hellfire line. It will come in two versions: an M.2 PCIe drive and a PCIe add-in card (AIC).

The M.2 NVMe Hellfire is powered by a Phison 5007 controller and multi-level cell (MLC) NAND flash memory. It will come in 240GB, 480GB, and 960GB capacities with rated read and write speeds of up to 2,500MB/s and 600MB/s, respectively.

As for the AIC model, it's positioned as Patriot's top-end storage device with rated read and write speeds of up to 3,000MB/s and 2,200MB/s, respectively. Like the M.2 NVMe model, the AIC version uses MLC NAND flash memory that's paired with the Phison 5007 controller. It also will be available in 240GB, 480GB, and 960GB capacities.

"We are very excited to get into the PCIe storage space," said Les Henry, VP of Engineering at Patriot. "With the launch of Intel's latest Skylake Processor, we are seeing more motherboards available in the consumer market that support PCIe devices. Along with the launch of Microsoft Windows 8.1 and Windows 10, which supports PCIe storage devices without the need for additional drivers, we feel this will be the future trend and will allow users to take full advantage of the PCIe storage speeds."

Patriot didn't have pics to share of the new drives, nor is it ready to reveal pricing (our own Jimmy Thang will be meeting with Patriot later this week at CES and perhaps will be able to coax some additional info out of the company). However, the company did say the drives will be available towards the end of the first quarter.

Follow Paul on Google+, Twitter, and Facebook

CyberPower Readies First 2-in-1 Desktop for Gamers and Streamers

Posted: 05 Jan 2016 10:17 AM PST

Dual function desktop

CyberPowerPC Pro Stream

CyberPowerPC has a handful of new products it's showcasing at CES, one of which is the Pro Streamer, an intriguing desktop that it's billing as the world's first 2-in-1 gaming and streaming system.

The 2-in-1 form factor is typically associated with tablets that pull double duty as laptops, like Microsoft's Surface Pro line and any number of detachables on the market. But CyberPowerPC is using the term to describe a new series of gaming PCs that feature "two independent systems in a single chassis for hyper efficiency, high performance gaming, and lag-free high-bitrate broadcast streaming."

Pricing for the Pro Streamer will start at $1,899, which includes an Intel Core i7-6600K processor paired with an Nvidia GeForce GTX 970 graphics card for gaming duties, and a Core i3-6100 configuration for streaming chores.

CyberPowerPC says its Pro Streamer models will come pre-installed and configured with OBS, the open source streaming and recording program, as well as XSplit. The idea is to throw separate resources at gaming and streaming within a single system.

In addition to the Pro Streamer, CyberPowerPC is readying a fully customizable 34-inch curved all-in-one called Arcus. It's the second such system we've heard of this week, the other one coming from Maingear.

Details are light -- it will come with air and liquid cooling options, a custom USB webcam with dual mikes, dual 2.5-inch HDD/SSD storage options, and standard DIMM memory.

Finally, CyberPowerPC also announced new notebook models, including a new 17.3-inch Fangbook 4 SK-X with G-Sync starting at $1,885, and Vector (17.3-inch) and Tracer (15.6-inch) laptops, all of which are Skylake-based systems with Nvidia graphics.

Follow Paul on Google+, Twitter, and Facebook

Linksys Unveils Lineup of 802.11ac Wave 2 MU-MIMO Wi-Fi Routers

Posted: 05 Jan 2016 09:48 AM PST

A better Wi-Fi experience

Linksys EA9500

Linksys knows a thing or two about high end routers -- its WRT1900AC is still one of the best models on the market -- so we're not surprised that it's rolling out a couple of new models under its new Max-Stream lineup of MU-MIMO Wi-Fi products.

Before we get to the new gear, let's talk a moment about MU-MIMO, or Multi-User, Multiple-Input, Multiple Output technology. As its name implies, MU-MIMO routers can serve data to more than one user at the same time.

At this point you might be thinking, "Pfft, my router already does that; I surf the web while the kids watch Netflix." The reason that's possible on routers that don't support MU-MIMO is that they're fast at constantly switching between devices. Even so, if your router doesn't support MU-MIMO, it's always talking to just one device at a time, even though it might feel otherwise.

MU-MIMO routers can communicate with and transfer data to and from multiple devices simultaneously. It's an important feature that's part of the 802.11ac Wave 2 spec, as today's homes are becoming increasingly connected with more and more devices, like smartphones, tablets, desktops, laptops, smart TVs, IoT gadgets, and so forth.

"Our new line-up of Linksys MU-MIMO solutions provide the networking backbone to allow consumers to enjoy high-performance and simultaneous Wi-Fi, including speed, range, and coverage," said Justin Doucette, director of product management, Linksys. "With the rise of 4K streaming and the growth of MU-MIMO clients, having the latest MU-MIMO technology is the best way to ensure users get the best Wi-Fi experience possible."

Moving on, Linksys just unveiled a pair of new Max-Stream routers with MU-MIMO support, the Max-Stream AC1900 Dual-Band MU-MIMO Gigabit (EA7500) and Max-Stream AC5400 Tri-Band Wi-Fi (EA9500).

The faster of the two (EA9500) sports a 1.4GHz dual-core processor, tri-band Wi-Fi (1,000Mbps on the 2.4GHz band plus two 5GHz bands, each capable of up to 2,166Mbps), eight LAN ports, one WAN port, eight external antennas, and one USB 3.0 and one USB 2.0 port.

Look for the EA9500 to be available in April for $400 MSRP.

As for the EA7500, it's half the price at $200 MSRP (available next month) and includes a Qualcomm IPQ 1.4GHz dual-core processor, up to 600Mbps on the 2.4GHz band, up to 1,300Mbps on the 5GHz band, four LAN ports, and both USB 3.0 and 2.0 ports.

Linksys also has on tap a Max-Stream AC1900+ MU-MIMO Wi-Fi Range Extender (RE7000) and a Max-Stream AC600 USB MU-MIMO Adapter, both available in the Spring for $150 and $60, respectively.

Follow Paul on Google+, Twitter, and Facebook

It's Official, Windows 10 Devices Top 200 Million Mark

Posted: 05 Jan 2016 08:53 AM PST

Off to a fast start

Windows 10 Wallpaper

Just prior to the New Year, the unofficial word on the web was that Windows 10 had extended its reach to over 200 million devices. Well, that figure is now official, as Microsoft on Monday provided an update with some interesting side data.

First, let's dispel the myth that Microsoft's figure is inflated by counting users who upgraded to Windows 10 and then rolled back to Windows 8/8.1 or Windows 7. That isn't the case. In no uncertain terms, Microsoft says that "as of today, there are more than 200 million monthly active devices around the world running Windows 10."

One could argue that a monthly active count is a snapshot that could still be off by downgrades that wouldn't be factored in until the next month, but those would be offset by upgrades (at least partially).

To the point, Windows 10 is gaining ground without any number-counting shenanigans. To drive the point home, Microsoft says that engagement on Windows 10 is the highest of any Windows version ever, with Windows 10 users logging over 11 billion hours in December.

What's also promising for Microsoft is that the upgrades are spread out.

"Windows 10 adoption is accelerating, with more than 40 percent of new Windows 10 devices becoming active since Black Friday," Microsoft states. "In fact, Windows continues to be on the fastest growth trajectory of any version of Windows -- ever -- outpacing Windows 7 by nearly 140 percent and Windows 8 by nearly 400 percent."

Microsoft also provided some stats on how people are using Windows 10. For example, Cortana has fielded over 2.5 billion questions since the OS launched, while Bing search queries on Windows 10 are 30 percent higher compared to previous versions of Windows.

Let's not forget about gaming. According to Microsoft, gamers have spent more than 4 billion hours playing PC titles on Windows 10 in 2015, and streamed more than 6.6 million hours of Xbox One games to Windows 10 PCs. That latter point is interesting because it ties in with Microsoft's plan to connect devices from different categories under a single ecosystem.

The adoption rate has also been good for the Windows Store, which has seen the number of paid transactions from PC and tablet customers double during the holiday season. In December, Windows 10 generated more than four and a half times the revenue per device compared to Windows 8.

That's a lot of braggadocio, though the underlying point is that Microsoft is executing on its strategy.

Follow Paul on Google+, Twitter, and Facebook

How To: Troubleshoot Your Home Network

Posted: 05 Jan 2016 12:00 AM PST


Fig 1 Network Troubleshooting

Home network care and feeding

Most homes have an ever-increasing number of devices connected to the Internet. On top of that, with most of these devices—from desktops that use a Wi-Fi connection, to laptops, tablets, smartphones, and now IoT (Internet of Things) gadgets—relying on wireless, we are increasingly dependent on having a stable connection throughout and around our residences. When this all works, it's nothing short of amazing, but when the Wi-Fi connection flakes out, it instantly becomes quite frustrating.

With some knowledge of home networking, the essentials of diagnosing a hiccupping wireless setup can be mastered. Along with that, it's possible to fine-tune the network to maximize performance.

When the Wi-Fi goes on the fritz, the first thing to do is to reboot the modem, the router, and the end device, such as the laptop (hereafter referred to more properly as the wireless client). This time-honored move of rebooting all the devices resets all the connections, and reestablishes the connection to the Internet. When this doesn't work is when things get more interesting, and a more methodical approach is indicated. While novices will focus on the wireless client to get things working, the educated move is to start at the source—specifically, to focus on the cable modem. Incidentally, while the majority of broadband in the United States comes from a cable provider, which requires a cable modem in a residential networking setup, this method can also be used for other types of ISPs (Internet service providers), such as DSL or fiber setups.

Fig 2 Network Troubleshooting

Speed metrics

The first thing to do is look at the broadband that is supplied by the ISP. While it is intuitively obvious that the Internet needs to be supplied correctly, it is a common mistake not to start here. The best way to verify the connection is to disconnect the router from the modem and connect a computer directly, via a wire, to the LAN port on the modem. If your desktop is not next to the modem, most notebooks have an Ethernet port and can be used for this purpose. Remember to reboot the modem before making the physical connection, and make sure the modem's LEDs come on.

The first thing to check is whether the computer can connect to the Internet at all. The next thing to do is check the network speeds provided by your ISP. You can check this by using a browser-based tool, such as www.speedtest.net, which provides both a download and an upload speed. This should then be compared to what your ISP is supposed to be providing via your broadband plan. If for some reason you don't know which speed tier you're on, check your cable bill or call your ISP directly to verify your plan speeds.
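If you'd rather script this check than open a browser, here's a minimal sketch that assumes the third-party speedtest-cli Python package is installed; the plan speeds are placeholders for whatever tier you actually pay for:

```python
# Minimal scripted speed check, assuming the third-party "speedtest-cli"
# package is installed (pip install speedtest-cli). Plan speeds below are
# placeholders -- substitute the tier you actually pay for.
import speedtest

PLAN_DOWN_MBPS = 100   # hypothetical plan speeds
PLAN_UP_MBPS = 10

st = speedtest.Speedtest()
st.get_best_server()
down_mbps = st.download() / 1e6   # results are reported in bits per second
up_mbps = st.upload() / 1e6

print(f"Download: {down_mbps:.1f} Mbps (plan: {PLAN_DOWN_MBPS})")
print(f"Upload:   {up_mbps:.1f} Mbps (plan: {PLAN_UP_MBPS})")
if down_mbps < 0.8 * PLAN_DOWN_MBPS:
    print("Download is well below the plan rate -- keep troubleshooting.")
```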

If the computer doesn't connect to the modem, verify that all the connections are fully made and tight (a small wrench can be used on the coaxial connection where the cable from outside meets the modem). Double-check that the Ethernet cable is fully seated between the modem and the computer, with the spring clip completely inserted. A second Ethernet cable can be swapped in between the modem and the computer to verify that the wire is not the issue, and you can try a second computer to ensure the problem isn't the Ethernet port on the first machine. Also make sure the modem is powered on, and keep in mind that it will generally take a few minutes to boot. If all of this is done and the computer still will not connect to the Internet, then the issue is either with the modem itself or with the signal upstream from your ISP. It's time to call your ISP; they can run a line test to look for a problem on their end, and advise on whether it's time to replace the modem or whether a tech visit is required. Put simply, these are not user-serviceable issues.

In other situations, the modem will boot and there will be a connection, but the network may be running slowly or inconsistently. Again, speeds can be verified by using Speedtest.net from a computer directly attached to the modem via a wire, with the router disconnected. While calling the ISP is certainly an option in this scenario, there are some items that the more tech-savvy user can look into.

Fig 3 Network Troubleshooting

The splitter

The splitter is a frequent failure point in a cable setup. Many users take advantage of "triple play" deals to receive their Internet, TV, and phone services from a single provider. The single wire run from the utility pole (at the top with the black wire, labeled "IN") gets split to supply connections to the cable modem and set-top video boxes (at the bottom with the white wires, labeled "OUT").

The splitter can cause issues via two different mechanisms. The first is that it sits outside, and with continuous exposure to the elements, it can start to corrode. Once corroded, it can easily compromise every signal passing through it. The fix is to replace the splitter, and some cable providers will exchange it at no additional cost. When techs are called out for Internet speed issues, replacing the splitter is often one of the first things they do.

The other issue is that the splitter does not divide the signal equally. If we look carefully at the three "OUT" ports, we can see that two are -7dB, and the last one on the right is -3.5dB. The splitter is designed so that the modem gets the stronger signal, and therefore it should be connected to the -3.5dB leg of the splitter, with the -7dB legs going to the TV set-top boxes. In some cases, the splitter is not installed correctly, and this should be fixed, as it can contribute to issues.
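For the curious, the reason -3.5dB is the "stronger" leg falls out of the decibel formula: the delivered power is 10^(dB/10) of the input. A quick illustration of our own, not taken from any splitter documentation:

```python
# Convert splitter port loss (in dB) to the fraction of input power delivered.
def db_to_power_fraction(db):
    return 10 ** (db / 10)

for port_db in (-3.5, -7.0):
    print(f"{port_db:+.1f} dB -> {db_to_power_fraction(port_db):.0%} of the signal")
# -3.5 dB passes roughly 45% of the input power, -7 dB roughly 20%,
# which is why the modem belongs on the -3.5 dB leg.
```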

Fig 4 Network Troubleshooting

The modem

The modem is where the signal from the ISP gets converted and passed to the Ethernet cable that plugs into a computer or wireless router. It is crucial to have an up-to-date modem that supports all the channels the ISP is supplying. The modem also has a page that supplies diagnostic information; on most modems, this is accessed by entering this address into a web browser: 192.168.100.1

This gives us a direct look into how the modem is performing. Start by looking at the downstream, and note that there are eight downstream channels. This tells us about the signal received from the ISP, and should be within -15dBmV to +15dBmV at a maximum, and ideally within -10 to +10dBmV. In the example screenshot above, we can see that all the signal levels are within these specs.

Next, we'll turn our attention to the upstream levels. This is a measure of the power your modem is requiring to push your data back to the cable provider. Looking at the screenshot, we can see that there are three channels for this on this modem. The power levels should be between +35 to +52dBmV, and in the example shown they're within these levels.

Another item to look at is the "SNR," which is the signal-to-noise ratio. This tells us how clean the signal is, with a higher number preferred. The SNR should be over 27dB with 64QAM or greater than 32dB with 256QAM, because at lower numbers there can be issues with packet loss.

The last thing to look at on the modem info page is the error rate. This is quantified as the "Correcteds," and the "Uncorrectables." It's normal to have a few packets of corrupted data received by the modem. In some cases, the data can be reconstructed via an algorithm, hence the term Correcteds, but sometimes it is corrupted beyond repair, and this is termed Uncorrectables. There is no hard and fast number for this, but the lower the better, with a rule of thumb being that the Correcteds and the Uncorrectables should be less than 10 percent of the octets, which is a measure of the amount of data sent through the modem.
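Pulling the last few paragraphs together, here's a small sketch that checks a set of modem readings against those rules of thumb; the sample values are made up for illustration, so substitute the numbers from your own modem's status page:

```python
# Sanity-check modem status readings against the rules of thumb above.
# The sample values are illustrative only -- plug in the numbers from your
# modem's status page (typically http://192.168.100.1).

downstream_dbmv = [2.1, 1.8, 0.9, -0.4, 3.2, 2.5, 1.1, 0.3]   # eight channels
upstream_dbmv = [41.0, 42.5, 43.0]
snr_db = [36.2, 35.8, 37.1, 36.5, 35.9, 36.8, 36.0, 36.4]
octets, correcteds, uncorrectables = 1_500_000_000, 12_000, 340

problems = []
if not all(-15 <= p <= 15 for p in downstream_dbmv):
    problems.append("downstream power outside -15 to +15 dBmV")
if not all(35 <= p <= 52 for p in upstream_dbmv):
    problems.append("upstream power outside +35 to +52 dBmV")
if min(snr_db) < 32:          # 256QAM rule of thumb; use 27 for 64QAM
    problems.append("SNR below 32dB")
if (correcteds + uncorrectables) > 0.10 * octets:
    problems.append("error counts above 10 percent of octets")

print("Modem looks healthy" if not problems else "; ".join(problems))
```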

Fig 5 Network Troubleshooting

Wireless interference

With the other parts of the network looked at, the remaining piece is the wireless component. There can be many sources of wireless interference, including cordless phones, cordless mice, Bluetooth devices, baby monitors, microwave ovens, and even Christmas lights. On top of all of these, the biggest factor is usually surrounding Wi-Fi networks, so it's important to account for them and optimize this aspect of the setup.

This process starts with software that can detect surrounding networks. For this example we're using Vistumbler, which is a freeware program for the Windows platform. A tablet or smartphone can also be used to detect competing networks, and there are apps available on both Android and iOS.

In the screenshot above, networks #1 and #2 are both from the same router at our residence. This gives us an idea of how strong our home network is, relative to our neighbors, by looking at the signal strength. By seeing that our network is the strongest one detected from within the house, we're off to a good start. In situations where you have a weaker router on a lower level, and your neighbor has a more powerful router on their upper level, this may not be the case.

Next, we look at the channels. In the first line, we see Channel 44, which belongs to the newer 802.11ac network and runs at 5GHz (in the United States, the 5GHz channels start at 36 and go up). Notice also that none of the surrounding networks use any 5GHz channels; the band is wide open and interference free, which is why we do our networking on the 5GHz frequency whenever possible. This is fairly typical, given that many users are still running older wireless networking gear. Also, the 5GHz frequency offers 23 non-overlapping channels, giving a good chance that there will be a free one in all but the most congested areas.

While 5GHz is preferred, many devices can still only use the older 2.4GHz frequency. There are 11 Wi-Fi channels on this frequency in the United States (in Japan they go up to 14), but they are closely spaced, so there are really only three non-overlapping channels: 1, 6, and 11. Most current routers offer a channel auto-select feature that lets the router choose a free channel, but in the screenshot shown, most of the networks are on Channel 6, while none are using Channel 1. In addition, Channel 7 will also interfere with Channel 6, as the two overlap. The best move here is to manually change our router to Channel 1 to avoid any interference on the 2.4GHz frequency.
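That channel-picking logic can be sketched in a few lines; the surveyed channel list below is hypothetical stand-in data for whatever Vistumbler or a phone app reports, and the overlap threshold is our simplification:

```python
# Pick the least-congested of the three non-overlapping 2.4GHz channels.
# A neighboring network counts against a candidate if it sits within ~4
# channels (roughly one 20MHz-wide overlap). Survey data is hypothetical.

surveyed = [6, 6, 6, 7, 6, 11, 6]          # channels seen in the scan
candidates = (1, 6, 11)

def congestion(candidate, seen):
    return sum(1 for ch in seen if abs(ch - candidate) <= 4)

scores = {c: congestion(c, surveyed) for c in candidates}
best = min(scores, key=scores.get)
print(f"Overlap counts: {scores} -> set the router to channel {best}")
```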

Keep it snappy

Internet connection issues remain a frustrating problem, and contributing to the challenge is that a well-run network requires routine attention and maintenance. By understanding the various components of a residential network, including the modem, router, and splitter, you can optimize the network to provide the best experience for all users. The time spent on a bit of maintenance can have substantial speed rewards.
