General Gaming Article

Bug in BitCoin Software Discovered, BitCoin Foundation Refutes Claim

Posted: 10 Feb 2014 03:43 PM PST

Value of BitCoin decreases

Today, BitCoin exchange website Mt. Gox announced the discovery of a bug in the BitCoin software affecting transactions of the crypto-currency. The bug was found a few days after the website noticed an issue with withdrawals, prompting it to temporarily pause all such transactions. However, Mt. Gox specified that withdrawals from a Mt. Gox wallet to an external address were the only type being affected and that any BitCoin transactions to and from Mt. Gox BitCoin addresses were not. 

In a statement Mt. Gox provided a simplified explanation of how the bug is affecting dealings saying, "A bug in the bitcoin software makes it possible for someone to use the Bitcoin network to alter transaction details to make it seem like a sending of bitcoins to a bitcoin wallet did not occur when in fact it did occur. Since the transaction appears as if it has not proceeded correctly, the bitcoins may be resent. MtGox is working with the Bitcoin core development team and others to mitigate this issue."

Because of the issue, and the temporary suspension of withdrawals, the value of BitCoin dropped from $800 to around $600. As of this article's publication, its value stands at $684.

Mt. Gox went on to assure consumers that BitCoin withdrawals to outside wallets will resume when the issue has been resolved.

However, BitCoin core developers Gavin Andresen and Jeff Garzik have responded to Mt. Gox's claims of a bug in the BitCoin software. According to the developers, the BitCoin software is not at fault; the problem is an old and well-known one.

In a post on the BitCoin Foundation blog, Andresen explained, "The issues that Mt. Gox has been experiencing are due to an unfortunate interaction between Mt. Gox's implementation of their highly customized wallet software, their customer support procedures, and their unpreparedness for transaction malleability, a technical detail that allows changes to the way transactions are identified."

Transaction malleability has been around since 2011, with the developers working to solve the issue. In the meantime, sites such as Mt. Gox should take steps to limit this problem, according to Garzik who spoke to CoinDesk, "There are certain security practices that sites like Mt. Gox need to follow. Most notably, customer support staff and related software must not assume that transaction IDs are unchangeable, prior to being confirmed in the block chain."
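For the technically curious, here's a toy Python sketch of why that matters; the strings and serialization are invented for illustration, not Bitcoin's real format. The key point is that a transaction's ID is a hash over the entire serialized transaction, signature included, so re-encoding the signature of an unconfirmed transaction yields a different ID for the same transfer:

    # Toy illustration of transaction malleability -- not real Bitcoin serialization.
    import hashlib

    def txid(serialized_tx: bytes) -> str:
        """Bitcoin-style double-SHA256 over the full serialized transaction."""
        return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).hexdigest()

    # Same inputs, outputs, and amounts...
    tx_original = b"in:A out:B amount:1BTC sig:3045..."
    # ...but with the signature re-encoded by a third party before confirmation.
    tx_mutated = b"in:A out:B amount:1BTC sig:3046..."

    print(txid(tx_original))  # the ID the sender's software watches for
    print(txid(tx_mutated))   # the ID that actually confirms -- different!

Software that tracks an unconfirmed withdrawal only by its original ID can thus be fooled into thinking the withdrawal never happened, and resend the coins.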

In his post, Andresen went on to explain that the development team has been working on the problem but that, "Finding the best and most responsible solution will take time." However, he assured users that "the Foundation is committed to working with companies to produce best practices to help improve software."

BitCoin has been increasing in value and usage, with companies such as TigerDirect and Xidax recently accepting it as payment for their products. But with the situation revolving around Mt. Gox, could potential consumers be scared off from using the virtual currency, or will more people continue to adopt it as a form of payment?

PC Performance Tested

Posted: 10 Feb 2014 02:46 PM PST

With our lab coats donned, our test benches primed, and our benchmarks at the ready, we look for answers to nine of the most burning performance-related questions.

If there's one thing that defines the Maximum PC ethos, it's an obsession with Lab-testing. What better way to discern a product's performance capabilities, or judge the value of an upgrade, or simply settle a heated office debate? This month, we focus our obsession on several of the major questions on the minds of enthusiasts. Is liquid cooling always more effective than air? Should serious gamers demand PCIe 3.0? When it comes to RAM, are higher clocks better? On the surface, the answers might seem obvious. But, as far as we're concerned, nothing is for certain until it's put to the test. We're talking tests that isolate a subsystem and measure results using real-world workloads. Indeed, we not only want to know if a particular technology or piece of hardware is truly superior, but also by how much. After all, we're spending our hard-earned skrilla on this gear, so we want our purchases to make real-world sense. Over the next several pages, we put some of the most pressing PC-related questions to the test. If you're ready for the answers, read on.

Core i5-4670K vs. Core i5-3570K vs. FX-8350

People like to read about the $1,000 high-end parts, but the vast majority of enthusiasts don't buy at that price range. In fact, they don't even buy the $320 chips. No, the sweet spot for many budget enthusiasts is around $220. To find out which chip is the fastest midrange part, we ran Intel's new Haswell Core i5-4670K against the current-champ Core i5-3570K as well as AMD's Vishera FX-8350.

AMD's FX-8350 has two cores up on the competition, but does that matter?

The Test: For our test, we socketed the Core i5-4670K into an Asus Z87 Deluxe with 16GB of DDR3/1600, an OCZ Vertex 3, a GeForce GTX 580 card, and Windows 8. For the Core i5-3570K, we used the same hardware in an Asus P8Z77-V Premium board, and the FX-8350 was tested in an Asus CrossHair V Formula board. We ran the same set of benchmarks that we used in our original review of the FX-8350 published in the Holiday 2012 issue.

The Results: First, the most important factor in the budget category is the price. As we wrote this, the street price of the Core i5-4670K was $240, the older Core i5-3570K was in the $220 range, and AMD's FX-8350 went for $200. The 4670K is definitely on the outer edge of the budget sweet spot while the AMD is cheaper by a bit.

Intel's Haswell Core i5-4670K slots right into the high end of the midrange.

Intel's Haswell Core i5-4670K slots right into the high end of the midrange.

One thing that's not disputable is the performance edge the new Haswell i5 part has. It stepped away from its Ivy Bridge sibling in every test we ran by respectable double-digit margins. And while the FX-8350 actually pulled close enough to the Core i5-3570K in enough tests to go home with some multithreaded victories in its pocket, it was definitely kept humble by Haswell. The Core i5-4670K plain-and-simply trashed the FX-8350 in the vast majority of the tests that can't push all eight cores of the FX-8350. Even worse, in the multithreaded tests where the FX-8350 squeezed past the Ivy Bridge Core i5-3570K, Haswell either handily beat or tied the chip with twice its cores.

The Core i5-3570K was great in its day, but it needs more than that to stay on top.

The Core i5-3570K was great in its day, but it needs more than that to stay on top.

Even folks concerned with bang-for-the-buck will find the Core i5-4670K makes a compelling argument. Yes, it's 20 percent more expensive than the FX-8350, but in some of our benchmarks, it was easily that much faster or more. In Stitch.Efx 2.0, for example, the Haswell was 80 percent faster than the Vishera. Ouch.
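To put a rough number on the bang-for-the-buck argument, here's a back-of-the-envelope Python script using the street prices above and our Stitch.Efx 2.0 times; it's a single benchmark, so treat the output as illustrative rather than gospel:

    # Rough price/performance math from street prices and Stitch.Efx 2.0 times.
    chips = {
        "Core i5-4670K": (240, 836),   # (price in USD, Stitch.Efx 2.0 seconds)
        "Core i5-3570K": (220, 971),
        "FX-8350":       (200, 1511),
    }
    for name, (price, secs) in chips.items():
        jobs_per_hour = 3600 / secs
        # Normalize by price: higher means more work per dollar spent.
        print(f"{name}: {jobs_per_hour / price * 1000:.1f} jobs/hour per $1,000")

Even normalized by price, the 4670K comes out ahead.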

So where does this leave us? For first place, we're proclaiming the Core i5-4670K the midrange king by a margin wider than Louie Anderson. Even the most ardent fanboys wearing green-tinted glasses or sporting an IVB4VR license plate can't disagree.

For second place, however, we're going to get all controversial and call it for the FX-8350, by a narrow margin. Here's why: the FX-8350 actually holds up against the Core i5-3570K in a lot of benchmarks, has an edge in multithreaded apps, and its AM3+ socket has a far longer roadmap than LGA1155, which is on the fast track to Palookaville.

Granted, Ivy Bridge and LGA1155 are still a great option, especially when bought on a discounted combo deal, but the platform is a dead man walking, and our general guidance for those who like to upgrade is to stick to sockets that still have a pulse. Let's not even mention that LGA1155 is the only platform here with a pathetic two SATA 6Gb/s ports. Don't agree? Great, because we have an LGA1156 motherboard and CPU to sell you.

Benchmarks
Core i5-4670K Core i5-3570K FX-8350
POV Ray 3.7 RC3 (sec) 168.53 227.75 184.8
Cinebench 10 Single-Core 8,500 6,866 4,483
Cinebench 11.5 6.95 6.41 6.90
7Zip 9.20 17,898 17,504 23,728
Fritz Chess 13,305 11,468 12,506
Premiere Pro CS6 (sec) 2,849 3,422 5,220
HandBrake Blu-ray encode  (sec) 9,042 9,539 8,400
x264 5.01 Pass 1 (fps) 66.3 57.1 61.3
x264 5.01 Pass 2 (fps) 15.8 12.7 15
Sandra (GB/s) 21.6 21.3 18.9
Stitch.Efx 2.0 (sec) 836 971 1,511
ProShow Producer 5 (sec) 1,275 1,463 1,695
STALKER: CoP low-res (fps) 173.5 167.3 132.1
3DMark 11 Physics 7,938 7,263 7,005
PC Mark 7 Overall 6,428 5,582 4,408
PC Mark 7 Storage 5,300 5,377 4,559
Valve Particle (fps) 180 155 119
Heaven 3.0 low-res (fps) 139.4 138.3 134.4

Best scores are bolded. Test bed described in text

Hyper-Threading vs. No Hyper-Threading 

Hyper-Threading came out 13 years ago with the original 3.06GHz Pentium 4, and was mostly a dud. Few apps were multithreaded and even Windows's own scheduler didn't know how to deal with HT, making some apps actually slow down when the feature was enabled. But the tech overcame those early hurdles to grow into a worthwhile feature today. Still, builders are continually faced with choosing between procs with and without HT, so we wanted to know definitively how much it matters.  

The Test: Since we haven't actually run numbers on HT in some time, we broke out a Core i7-4770K and ran tests with HT turned on and off. We used a variety of benchmarks with differing degrees of threadedness to test the technology's strengths and weaknesses.

The Results: One look at our results and you can tell HT is well worth it if your applications can use the available threads. We saw benefits of 10–30 percent from HT in some apps. But if your app can't use the threads, you gain nothing. And in rare instances, it appears to hurt performance slightly—as in Hitman: Absolution when run to stress the CPU rather than the GPU. Our verdict is that you should pay for HT, but only if your chores include 3D modeling, video encoding or transcoding, or other thread-heavy tasks. Gamers who occasionally transcode videos, for example, would get more bang for their buck from a Core i5-4670K.
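Want to know whether your own chores scale before paying the Hyper-Threading premium? A quick probe like this Python sketch will tell you; render_tile is a stand-in for whatever thread-friendly workload you actually run, and the worker counts are examples:

    # A quick scaling probe: time a CPU-bound job at different worker counts.
    import multiprocessing as mp
    import time

    def render_tile(seed: int) -> float:
        """Stand-in for a thread-friendly chore like rendering or encoding."""
        total = 0.0
        for i in range(2_000_000):
            total += (seed * i) % 7
        return total

    def timed_run(workers: int, jobs: int = 32) -> float:
        start = time.perf_counter()
        with mp.Pool(processes=workers) as pool:
            pool.map(render_tile, range(jobs))
        return time.perf_counter() - start

    if __name__ == "__main__":
        for n in (4, 8):  # e.g., physical cores vs. cores plus Hyper-Threading
            print(f"{n} workers: {timed_run(n):.2f} sec")

If the eight-worker run isn't meaningfully faster than the four-worker run, put the HT money toward a better GPU instead.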

Benchmarks
HT Off HT On
PCMark 7 Overall 6,308 6,348
Cinebench 11.5 6.95 8.88
Stitch.EFx 2.0 (sec) 772 772
ProShow Producer 5.0  (sec) 1,317 1,314
Premiere Pro CS6 (sec) 2,950 2,522
HandBrake 0.9.9 (sec) 1,200 1,068
3DMark 11 Overall X2,210 X2,209
Valve Particle Test (fps) 191 226
Hitman: Absolution, low res (fps) 92 84
Total War: Shogun 2 CPU Test (fps) 42.4 41

Best scores are bolded. We used a Core i7-4770K on an Asus Z87 Deluxe, with a Neutron GTX 240 SSD, a GeForce GTX 580, and 16GB of DDR3/1600, running 64-bit Windows 8.

Click the next page to read about air cooling vs water cooling


Air Cooling vs. Water Cooling 

There are two main ways to chill your CPU: a heatsink with a fan on it, or a closed-loop liquid cooler (CLC). Unlike a custom loop, you don't need to periodically drain and flush the system or check it for leaks. The "closed" part means that it's sealed and integrated. This integration also reduces manufacturing costs and makes the setup much easier to install. If you want maximum overclocks, custom loops are the best way to go. But it's a steep climb in cost for a modest improvement beyond what current closed loops can deliver.  

But air coolers are not down for the count. They're still the easiest to install and the cheapest. However, the prices between air and water are so close now that it's worth taking a look at the field to determine what's best for your budget. 

The Test: To test the two cooling methods, we dropped them into a rig with a hex-core Intel Core i7-3960X overclocked to 4.25GHz on an Asus Rampage IV Extreme motherboard, inside a Corsair 900D. By design, it's kind of a beast and tough to keep cool.

The Budget Class 

The Results: At this level, the Cooler Master 212 Evo is legend…ary. It runs cool and quiet, it's easy to pop in, it can adapt to a variety of sockets, it's durable, and it costs about 30 bucks. Despite the 3960X's heavy load, the 212 Evo averages about 70 degrees C across all six cores, with a room temperature of about 22 C, or 71.6 F. Things don't tend to get iffy until 80 C, so there's room to go even higher. Not bad for a cooler with one 120mm fan on it.

Entry-level water coolers cost substantially more, unless you're patient enough to wait for a fire sale. They require more materials, more manufacturing, and more complex engineering. The Cooler Master Seidon 120M is a good example of the kind of unit you'll find at this tier. It uses a standard 120mm fan attached to a standard 120mm radiator (or "rad") and currently has a street price of $60. But in our tests, its thermal performance was about the same, or worse, than the 212 Evo. In order to meet an aggressive price target, you have to make some compromises. The pump is smaller than average, for example, and the copper block you install on top of the CPU is not as thick. The Seidon was moderately quieter, but we have to give the nod to the 212 Evo when it comes to raw performance.

The Cooler Master 212 Evo has arguably the best price-performance ratio around.

The Performance Class 

The Results: While a CLC has trouble scaling its manufacturing costs down to the budget level, there's a lot more headroom when you hit the $100 mark. The NZXT Kraken X60 CLC is one of the best examples in this class; its dual–140mm fans and 280mm radiator can unload piles of heat without generating too much noise, and it has a larger pump and apparently larger tubes than the Seidon 120M. Our tests bear out the promise of the X60's design, with its "quiet" setting delivering a relatively chilly 66 C, or about 45 degrees above the ambient room temp.

It may not look like much, but the Kraken X60 is the Ferrari of closed-loop coolers.

Is there any air cooler that can keep up? Well, we grabbed a Phanteks TC14PE, which uses two heatsinks instead of one, dual–140mm fans, and retails at $85–$90. It performed only a little cooler than the 212 Evo, but it did so very quietly, like a ninja. At its quiet setting, it trailed behind the X60 by 5 C. It may not sound like much, but that extra 5 C of headroom means a higher potential overclock. So, water wins the high end.

Benchmarks
Seidon 120M | 212 Evo | Kraken X60 | TC14PE (each shown as Quiet / Performance Mode)
Ambient Air 22.1 / 22.2 | 20.5 / 20 | 20.9 / 20.7 | 20 / 19.9
Idle Temperature 38 / 30.7 | 35.5 / 30.5 | 29.7 / 28.8 | 32 / 28.5
Load Temperature 78.3 / 70.8 | 70 / 67.3 | 66 / 61.8 | 70.3 / 68.6
Load - Ambient 56.2 / 48.6 | 49.5 / 47.3 | 45.1 / 41.1 | 50.3 / 48.7

All temperatures in degrees Celsius. Best scores bolded.
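A note on reading that chart: ambient temperatures drifted a bit between runs, so the fairest comparison is load temperature minus ambient, which is how the last row was computed. Here's that normalization in Python, using the quiet-mode numbers above:

    # Normalize each cooler's load temperature by the ambient during its run.
    runs = {
        "Seidon 120M": (78.3, 22.1),   # (load C, ambient C), quiet mode
        "212 Evo":     (70.0, 20.5),
        "Kraken X60":  (66.0, 20.9),
        "TC14PE":      (70.3, 20.0),
    }
    for cooler, (load, ambient) in runs.items():
        print(f"{cooler}: {load - ambient:.1f} C over ambient")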

Is High-Bandwidth RAM worth it? 

Today, you can get everything from vanilla DDR3/1333 all the way to exotic-as-hell DDR3/3000. The question is: Is it actually worth paying for anything more than the garden-variety RAM?  

The Test: For our test, we mounted a Core i7-4770K into an Asus Z87 Deluxe board and fitted it with AData modules at DDR3/2400, DDR3/1600, and DDR3/1333. We then picked a variety of real-world (and one synthetic) tests to see how the three compared.

The Results: First, let us state that if you're running integrated graphics and you want better 3D performance, pay for higher-clocked RAM. With discrete graphics, though, the advantage isn't as clear. We had several apps that saw no benefit from going from 1,333MHz to 2,400MHz. In others, though, we saw a fairly healthy boost, 5–10 percent, by going from standard DDR3/1333 to DDR3/2400. The shocker came in Dirt 3, which we ran in low-quality modes so as not to be bottlenecked by the GPU. At low resolution and low image quality, we saw an astounding 18 percent boost.  

To bring you back down to earth, you should know that cranking the resolution in the game all but erased the difference. To see any actual benefit, we think you'd really need something like a tri-SLI GeForce GTX 780 setup, and we expect that the vast majority of games won't actually give you that scaling.
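If you're curious where your own memory subsystem lands, you can get a rough read without firing up Sandra. This NumPy sketch streams a large array through memory and reports effective throughput; consider it an uncalibrated ballpark probe, not a real benchmark:

    # Rough memory-bandwidth probe: stream a big array and time it.
    import time
    import numpy as np

    src = np.ones(50_000_000, dtype=np.float64)  # ~400MB of data

    start = time.perf_counter()
    dst = src * 2.0  # one streaming read plus one streaming write
    elapsed = time.perf_counter() - start

    bytes_moved = src.nbytes * 2  # read src, write dst
    print(f"~{bytes_moved / elapsed / 1e9:.1f} GB/s effective")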

We think the sweet spot for price/performance is either DDR3/1600 or DDR3/1866.

Benchmarks
DDR3/1333 DDR3/1600 DDR3/2400
Stitch.Efx 2.0 (sec) 776 773 763
PhotoMatix HDR (sec) 181 180 180
ProShow Producer 5.0 (sec) 1,370 1,337 1,302
HandBrake 0.9.9 (sec) 1,142 1,077 1,037
3DMark Overall 2,211 2,214 2,215
Dirt 3 Low Quality (fps) 234 247.6 272.7
Price for two 4GB DIMMs (USD) $70 $73 $99

Best scores bolded.

Click the next page to see how two midrange graphics cards stack up against one high-end GPU!


One High-End GPU vs. Two Midrange GPUs

One of the most common questions we get here at Maximum PC, aside from details about our lifting regimen, is whether to upgrade to a high-end GPU or run two less-expensive cards in SLI or CrossFire. It's a good question, since high-end GPUs are expensive, and cards that are two rungs below them in the product stack cost about half the price, which naturally raises the question: Are two $300 cards faster than a single $600 card? Before we jump to the tests, note that dual-card setups suffer from a unique set of issues that need to be considered. First is the frame-pacing situation, where the cards are unable to deliver frames evenly, so even though the overall frames per second is high, there is still micro-stutter on the screen. Both Nvidia and AMD dual-GPU configs suffer from this, but Nvidia's SLI has less of a problem than AMD's CrossFire at this time. Both companies also need to offer drivers that allow games and benchmarks to see both GPUs, but they are equally good at delivering drivers the day games are released, so the days of waiting two weeks for a driver are largely over.
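Micro-stutter is easy to miss if you stare only at average frame rates. This toy Python sketch (the frame times are made up) shows how a dual-GPU rig can post the same average FPS as a single card while delivering frames far less evenly:

    # Two runs with identical average FPS but very different pacing.
    import statistics

    smooth = [16.7] * 8            # single card: even frame delivery
    stutter = [8.0, 25.4] * 4      # dual GPU: alternating fast/slow frames

    for name, frame_times in (("smooth", smooth), ("stutter", stutter)):
        fps = 1000 / (sum(frame_times) / len(frame_times))
        spread = statistics.pstdev(frame_times)
        print(f"{name}: {fps:.0f} avg FPS, frame-time spread {spread:.1f} ms")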

2x Nvidia GTX 660 Ti vs. GTX 780

The Test: We considered using two $250 GTX 760 GPUs for this test, but Nvidia doesn't have a $500 GPU to test them against, and since this is Maximum PC, we rounded up a pair of $300 GTX 660 Tis instead. This card was recently superseded by the GTX 760, causing its street price to drop a bit below $300, but since $300 is its MSRP, we're using it for this comparison. We put the two of them up against the GTX 780, which costs roughly $650, so it's not a totally fair fight, but we figured it's close enough for government work. We ran our standard graphics test suite in both single- and dual-card configurations.

The Results: It looks like our test was conclusive—two cards in SLI provide a slightly better gaming experience than a single badass card, taking top marks in seven out of nine tests. And they cost less, to boot. Nvidia's frame-pacing was virtually without issues, too, so we don't have any problem recommending Nvidia SLI at this time. It is the superior cost/performance setup as our benchmarks show.

Nvidia's new GK110-based GTX 780 takes on two ankle-biter GTX 660 Ti GPUs.

2x Radeon HD 7790 vs. Radeon HD 7970 GHz

The Test: For our AMD comparison, we took two of the recently released HD 7790 cards, at $150 each, and threw them into the octagon with a $400 GPU, the PowerColor Radeon HD 7970 Vortex II, which isn't technically a "GHz" board, but is clocked at 1,100MHz, so we think it qualifies. We ran our standard graphics test suite in both single- and dual-card configurations.

Two little knives of the HD 7790 ilk take on the big gun Radeon HD 7970.

The Results: Our AMD tests resulted in a very close battle, with the dual-card setup taking the win by racking up higher scores in six out of nine tests, and the single HD 7970 card taking the top spot in the other three. But what you can't see in the chart is that the dual HD 7790 cards were totally silent while the HD 7970 card was loud as hell. Also, AMD has acknowledged the micro-stutter problem with CrossFire and promises a software fix for it, but unfortunately that fix is going to arrive right as we go to press on July 31. Even without it, gameplay seemed smooth, and the duo is clearly faster, so it gets our vote as the superior solution, at least in this config.

Benchmarks
GTX 660 Ti SLI GTX 780 Radeon HD 7790 CrossFire Radeon HD 7970 GHz
3DMark Fire Strike 8,858 8,482 8,842 7,329
Catzilla (Tiger) Beta 7,682 6,933 6,184 4,889
Unigine Heaven 4.0 (fps) 33 35 30 24
Crysis 3 (fps) 26 24 15 17
Shogun 2 (fps) 60 48 51 43
Far Cry 3 (fps) 41 35 21 33
Metro: Last Light (fps) 24 22 13 14
Tomb Raider (fps) 18 25 24 20
Battlefield 3 (fps) 56 53 57 41

Best scores are bolded. Our test bed is a 3.33GHz Core i7 3960X Extreme Edition in an Asus P9X79 motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 7 Ultimate. All tests, except for the 3DMark tests, are run at 2560x1600 with 4X AA.

PCI Express 2.0 vs. PCI Express 3.0

PCI Express is the specification that governs the amount of bandwidth available between the CPU and the PCI Express slots on your motherboard. We've recently made the jump from version 2.0 to version 3.0, and the PCI Express interface on all late-model video cards is now PCIe 3.0, causing many frame-rate addicts to question the sanity of placing a PCIe 3.0 GPU into a PCIe 2.0 slot on their motherboard. The reason: PCIe 3.0 has quite a bit more theoretical bandwidth than PCIe 2.0. Specifically, one PCIe 2.0 lane can transmit 500MB/s in one direction, while a PCIe 3.0 lane can pump up to 985MB/s, so it's almost double the bandwidth per lane. Multiply that by 16, since that's how many lanes are being used, and the difference is substantial. However, that extra bandwidth only matters if it's actually needed, which is what we wanted to find out.
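The arithmetic is simple enough to sanity-check yourself; this snippet just multiplies the per-lane figures above by 16 lanes:

    # Theoretical one-direction bandwidth for a x16 slot, per the lane
    # figures above (real-world throughput will be somewhat lower).
    LANES = 16
    pcie2_lane_mb = 500   # PCIe 2.0: 500MB/s per lane
    pcie3_lane_mb = 985   # PCIe 3.0: up to 985MB/s per lane

    print(f"PCIe 2.0 x16: {pcie2_lane_mb * LANES / 1000:.1f} GB/s")   # 8.0
    print(f"PCIe 3.0 x16: {pcie3_lane_mb * LANES / 1000:.2f} GB/s")  # 15.76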

The Test: We plugged an Nvidia GTX Titan into our Asus P9X79 board and ran several of our gaming tests with the top PCI Express x16 slot alternately set to PCIe 3.0 and PCIe 2.0. On this particular board you can switch the setting in the BIOS.

The Results: We had heard previously that there was very little difference between PCIe 2.0 and PCIe 3.0 on current systems, and our tests back that up. In every single test, Gen 3.0 was faster, but the difference is so small it's very hard for us to believe that PCIe 2.0 is being saturated by our GPU. It's also quite possible that one would see more pronounced results using two or more cards, but we wanted to "keep it real" and just use one card.

Benchmarks
GTX Titan PCIe 2.0 GTX Titan PCIe 3.0
3DMark Fire Strike 9,363 9,892
Unigine Heaven 4.0 (fps) 37 40
Crysis 3 (fps) 31 32
Shogun 2 (fps) 60 63
Far Cry 3 (fps) 38 42
Metro: Last Light (fps) 22 25
Tomb Raider (fps) 22 25

Best scores are bolded. Our test bed is a 3.33GHz Core i7 3960X Extreme Edition in an Asus P9X79 motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 7 Ultimate. All games are run at 2560x1600 with 4X AA except for the 3DMark tests.

PCIe x8 vs. PCIe x16

PCI Express expansion slots vary in both physical size and the amount of bandwidth they provide. The really long slots are called x16 slots, as they provide 16 lanes of PCIe bandwidth, and that's where our video cards go, for obvious reasons. Almost all of the top slots in a motherboard (those closest to the CPU) are x16, but sometimes those 16 lanes are divided between two slots, so what might look like a x16 slot is actually a x8 slot. The tricky part is that sometimes the slots below the top slot only offer eight lanes of PCIe bandwidth, and sometimes people need to skip that top slot because their CPU cooler is in the way or water-cooling tubes are coming out of a radiator in that location. Or you might be running a dual-card setup, and if you use a x8 slot for one card, it will force the x16 slot to run at x8 speeds. Here's the question: Since a x8 slot provides only half the bandwidth of a x16 slot, is your performance hobbled by running at x8?

The Test: We wedged a GTX Titan first into a x16 slot and then into a x8 slot on our Asus P9X79 motherboard and ran our gaming tests in order to compare the difference.

The Results: We were surprised by these results, which show x16 to be a clear winner. Sure, it seems obvious, but we didn't think even current GPUs were saturating the x8 interface; apparently they are, so this is an easy win for x16.

The Asus P9X79 offers two x16 slots (blue) and two x8 slots (white).

Benchmarks
GTX Titan PCIe x16 GTX Titan PCIe x8
3DMark Fire Strike 9,471 9,426
Catzilla (Tiger) Beta 7,921 7,095
Unigine Heaven 4.0 (fps) 40 36
Crysis 3 (fps) 32 37
Shogun 2 (fps) 64 56
Far Cry 3 (fps) 43 39
Metro: Last Light (fps) 25 22
Tomb Raider (fps) 25 23
Battlefield 3 (fps) 57 50

Tests performed on an Asus P9X79 Deluxe motherboard.

IDE vs. AHCI

If you go into your BIOS and look at the options for your motherboard's SATA controller, you usually have three options: IDE, AHCI, and RAID. RAID is for when you have more than one drive, so for running just a lone-wolf storage device, you have AHCI and IDE. For ages we always just ran IDE, as it worked just fine. But now there's AHCI too, which stands for Advanced Host Controller Interface, and it supports features IDE doesn't, such as Native Command Queuing (NCQ) and hot swapping. Some people also claim that AHCI is faster than IDE due to NCQ and the fact that it's newer. Also, for SSD users, IDE does not support the Trim command, so AHCI is critical to an SSD's well-being over time. But is there a speed difference between IDE and AHCI for an SSD? We set out to find out.
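To picture what NCQ buys you, here's a deliberately oversimplified Python model (real drives use much smarter logic than a plain sort): in IDE mode, requests are serviced strictly in arrival order, while AHCI hands the drive a whole queue that it can reorder.

    # Toy NCQ model: reorder queued requests by logical block address.
    pending_lbas = [8820, 120, 9001, 130, 8800]  # outstanding reads

    ide_order = list(pending_lbas)    # first-come, first-served
    ncq_order = sorted(pending_lbas)  # e.g., batch nearby addresses together

    print("IDE:", ide_order)
    print("NCQ:", ncq_order)  # nearby LBAs grouped, minimizing travel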

The Test: We enabled IDE on our SATA controller in the BIOS, then installed our OS. Next, we added our Corsair test SSD and ran a suite of storage tests. We then enabled AHCI, reinstalled the OS, re-added the Corsair Neutron test SSD, and re-ran all the tests.

The Results: We haven't used IDE in a while, but we assumed it would allow our SSD to run at full speed even if it couldn't NCQ or hot-swap anything. And we were wrong. Dead wrong. Performance with the SATA controller set to IDE was abysmal, plain and simple.

Benchmarks
Corsair Neutron GTX IDE Corsair Neutron GTX AHCI
CrystalDiskMark    
Avg. Sustained Read (MB/s) 224 443
Avg. Sustained Write (MB/s) 386 479
AS SSD - Compressed Data    
Avg. Sustained Read (MB/s) 210 514
Avg. Sustained Write (MB/s) 386 479
ATTO    
64KB File Read (MB/s, 4QD) 151 351
64KB File Write (MB/s, 4QD) 354 485
Iometer
4KB Random Write 32QD (IOPS) 19,943 64,688
PCMark Vantage x64 6,252 41,787

Best scores are bolded. All tests conducted on our hard drive test bench, which consists of a Gigabyte Z77X-UP4 motherboard, Intel Core i5-3470 3.2GHz CPU, 8GB of RAM, Intel 520 Series SSD, and a Cooler Master 450W power supply.

Click the next page to read about SSD RAID vs a single SSD!


 

SSD RAID vs. Single SSD

This test is somewhat analogous to the GPU comparison, as most people would assume that two small-capacity SSDs in RAID 0 would be able to outperform a single 256GB SSD. The little SSDs have a performance penalty out of the gate, though, as SSD performance usually improves as capacity increases, because the controller can spread work across more NAND dies in parallel, just as higher-density platters increase hard drive performance. This is not a universal truth, however, and whether or not performance scales with an SSD's capacity depends on the drive's firmware, NAND flash, and other factors, but in general, the higher the capacity of a drive, the better its performance. The question, then, is: Is the performance advantage of the single large drive enough to outpace two little drives in RAID 0?
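Mechanically, RAID 0 is simple, which is why sequential transfers scale so well: writes are chopped into stripes that alternate between the drives. Here's a toy Python sketch using the 64K stripe size we chose for the array below:

    # RAID 0 in miniature: split a write into stripes, round-robin across drives.
    STRIPE = 64 * 1024  # 64K stripe size

    def stripe_write(data: bytes, drives: int = 2):
        """Return the list of chunks each drive would receive."""
        chunks = [data[i:i + STRIPE] for i in range(0, len(data), STRIPE)]
        return [chunks[d::drives] for d in range(drives)]

    drive_a, drive_b = stripe_write(bytes(512 * 1024))  # a 512KB write
    print(len(drive_a), len(drive_b))  # 4 and 4: each drive does half the work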

Before we jump into the numbers, we have to say a few things about SSD RAID. The first is that with the advent of SSDs, RAID setups are not quite as common as they were in the HDD days, at least when it comes to what we're seeing from boutique system builders. The main reason is that it's really not that necessary since a stand-alone SSD is already extremely fast. Adding more speed to an already-fast equation isn't a big priority for a lot of home users (this is not necessarily our audience, mind you). Even more importantly, the biggest single issue with SSD RAID is that the operating system is unable to pass the Trim command to the RAID controller in most configurations (Intel 7 and 8 series chipsets excluded), so the OS can't tell the drive how to keep itself optimized, which can degrade performance of the array in the long run, making the entire operation pointless. Now, it's true that the drive's controller will perform "routine garbage collection," but how that differs from Trim is uncertain, and whether it's able to manage the drive equally well is also unknown. However, the lack of Trim support on RAID 0 is a scary thing for a lot of people, so it's one of the reasons SSD RAID often gets avoided. Personally, we've never seen it cause any problems, so we are fine with it. We even ran it in our Dream Machine 2013, and it rocked the Labizzle. So, even though people will say SSD RAID is bad because there's no Trim support, we've never been able to verify exactly what that "bad" means long-term.

It's David and Goliath all over again, as two puny SSDs take on a bigger, badder drive.

The Test: We plugged in two Corsair Neutron SSDs, set the SATA controller to RAID, created our array with a 64K stripe size, and then ran all of our tests off an Intel 520 SSD boot drive. We used the same protocol for the single drive.

The Results: The results of this test show a pretty clear advantage for the RAIDed SSDs, as they were faster in seven out of nine tests. That's not surprising, however, as RAID 0 has always benchmarked well. That said, the single 256GB Corsair Neutron drive came damned close to the RAID in several tests, including CrystalDiskMark, ATTO at a queue depth of four, and AS SSD. It's not completely an open-and-shut case, though, because the RAID scored poorly in the PCMark Vantage "real-world" benchmark, with just one-third of the score of the single drive. That's cause for concern, but with these scripted tests it can be tough to tell exactly where things went wrong, since they just run and then spit out a score. Also, the big advantage of RAID is that it boosts sequential-read and -write speeds, since you have two drives working in parallel (conversely, you typically won't see a big boost for the small random writes made by the OS). Yet the SSDs in RAID were actually slower than the single SSD in our Sony Vegas "real-world" 20GB file encode test, which is where they should have had a sizable advantage. For now, we'll say this much: The RAID numbers look good, but more "real-world" investigation is required before we can tell you one is better than the other.

Benchmarks
1x Corsair Neutron 256GB 2x Corsair Neutron 128GB RAID 0
CrystalDiskMark    
Avg. Sustained Read (MB/s) 512 593
Avg. Sustained Write (MB/s) 436 487
AS SSD - Compressed Data    
Avg. Sustained Read (MB/s) 506 647
Avg. Sustained Write (MB/s) 318 368
ATTO    
64KB File Read (MB/s, 4QD) 436 934
64KB File Write (MB/s, 4QD) 516 501
Iometer    
4KB Random Write 32QD (IOPS) 70,083 88,341
PCMark Vantage x64 70,083 23,431
Sony Vegas Pro 9 Write (sec) 343 429

Best scores are bolded. All tests conducted on our hard-drive test bench, which consists of a Gigabyte Z77X-UP4 motherboard, Intel Core i5-3470 3.2GHz CPU, 8GB of RAM, Intel 520 Series SSD, and a Cooler Master 450W power supply.

Benchmarking: Synthetic vs. Real-World 

There's a tendency for testers to dismiss "synthetic" benchmarks as having no value whatsoever, but that attitude is misplaced. Synthetics got their bad name in the 1990s, when they were the only game in town for testing hardware. Hardware makers soon started to optimize for them, and on occasion, those actions would actually hurt performance in real games and applications. 

The 1990s are long behind us, though, and benchmarks and the benchmarking community have matured to the point that synthetics can offer very useful metrics when measuring the performance of a single component or system. At the same time, real-world benchmarks aren't untouchable. If a developer receives funding or engineering support from a hardware maker to optimize a game or app, does that really make it neutral? There is the argument that it doesn't matter because if there's "cheating" to improve performance, that only benefits the users. Except that it only benefits those using a certain piece of hardware.

In the end, it's probably more important to understand the nuances of each benchmark and how to apply them when testing hardware. SiSoft Sandra, for example, is a popular synthetic benchmark with a slew of tests for various components. We use it for memory bandwidth testing, for which it is invaluable—as long as the results are put in the right context. A doubling of main system memory bandwidth, for example, doesn't mean you get a doubling of performance in games and apps. Of course, the same caveats apply to real-world benchmarks, too.

Avoid the Benchmarking Pitfalls

Even seasoned veterans are tripped up by benchmarking pitfalls, so beginners should be especially wary of making mistakes. Here are a few tips to help you on your own testing journey.

Put away your jump-to-conclusions mat. If you set condition A and see a massive boost—or no difference at all when you were expecting one—don't immediately attribute it to the hardware. Quite often, it's the tester introducing errors into the test conditions that causes the result. Double-check your settings and re-run your tests and then look for feedback from others who have tested similar hardware to use as sanity-check numbers.

When trying to compare one platform with another (certainly not ideal)—say, a GPU in system A against a GPU in system B—be especially wary of the differences that can result simply from using two different PCs, and try to make them as similar as possible. From drivers to BIOS to CPU and heatsink—everything should match. You may even want to put the same GPU in both systems to make sure the results are consistent.

Use the right benchmark for the hardware. Running Cinebench 11.5—a CPU-centric test—to review memory, for example, would be odd. A better fit would be applications that are more memory-bandwidth sensitive, such as encoding, compression, synthetic RAM tests, or gaming.

Be honest. Sometimes, when you shell out for new hardware, you want it to be faster because no one wants to pay through the nose to see no difference. Make sure your own feelings toward the hardware aren't coloring the results.

The Most Graphically Demanding PC Games

Posted: 10 Feb 2014 12:46 PM PST

Update: By popular demand, we've added ARMA 3 to our roundup! Read on to see how it does against the competition!

At Maximum PC we love pushing our PCs to their limits by testing high-end games at maximum settings. To reach these limits, you'll need to fire up the most über-demanding games. What are the most graphically demanding games, you ask? We've thrown together a list of the gnarliest PC games that will give your precious gaming rig a kick-ass workout.

Testing Methodology:

We tested each game at maximum settings on a 1920x1080 display. Our modest mid-range test rig consisted of an i7-2700K CPU overclocked to 4.5GHz, a GTX 680 video card overclocked to 1140MHz, and 8GB of G.Skill DDR3 RAM. We started out by disabling motion blur, which is a frame rate crutch, and cranked up all of the other settings as high as they would go. Another setting that was crucial to disable was V-Sync, so that our frame rate was not capped at our 60Hz refresh rate. We played each game for 15 minutes and recorded its average frame rate using FRAPS. Each frame rate listed below is that of our playthrough and may not be exactly repeatable, because the averages were captured with real-world dynamic testing, which can vary from playthrough to playthrough, even on a rig with the exact same hardware. Still, our tests should provide an accurate measure of relative performance between titles. The rankings are listed from least taxing to most, based on average frame rate.
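For those benchmarking along at home, turning a frame-time log into an average frame rate is trivial. This Python sketch assumes a two-column CSV of frame numbers and cumulative milliseconds, in the style of the frametimes logs FRAPS can record; the filename is hypothetical:

    # Average FPS from a frame-time log: frames divided by elapsed seconds.
    import csv

    times_ms = []
    with open("frametimes.csv", newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for frame, t in reader:
            times_ms.append(float(t))

    intervals = len(times_ms) - 1
    duration_s = (times_ms[-1] - times_ms[0]) / 1000.0
    print(f"Average: {intervals / duration_s:.1f} FPS")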

Call of Duty Ghosts: #11

Game Engine: IW Engine


The latest installment of Call of Duty isn't too taxing to run, as we experienced an average frame rate of 67.9 FPS. In our gameplay session, we floated through space and ran around inside a few burning buildings during the game's first mission. COD games aren't very challenging to run because they still use the same game engine as COD 4, which came out over six years ago. To put it into perspective, the old engine is easy enough for last-gen consoles to run at 60 FPS. Maybe the next installment in the series will finally replace the outdated engine so the franchise can rival the graphical capabilities of other modern military shooters.

Battlefield 4: #10

Game Engine: Frostbite 3


We were surprised by the fact that Battlefield 4 landed in 10th place, because it's generally considered one of the most demanding shooters out there. To be fair, Nvidia's older driver is the original culprit behind BF4's initially low frame rates. With the newer 331.82 driver, we saw a whopping 18 percent improvement in performance. If we had used the older Nvidia driver, we would have seen frame rates around 47 FPS and BF4 would have ranked at number seven, but instead we experienced a solid 59 FPS, knocking it down to number ten overall. We played through the game's first mission and saw our cover blown up by grenades, bullets, and mortars, causing our frame rate to dip to around 45 FPS at times, but we also saw it go as high as 80 FPS.

Crysis: #9

Game Engine: CryEngine 2


When Crytek made Crysis, they wanted to make a "future proof" game, and six years later we can say they succeeded: the game grabs the 9th spot on this list, with our rig managing an average frame rate of just 58.2 FPS. What's to blame for the relatively low frame rate in such an old game? Particle effects are hot and heavy in Crysis and caused our frame rate to dip while testing; throw in some extreme physics (not to be confused with PhysX) and some realistic water effects, and you get a six-year-old game that's hard to run even on an overclocked GTX 680.

Hitman Absolution: #8

Game Engine: Glacier 2


We tested Hitman Absolution by sleuthing around the first level, covertly killing foes by snapping necks with our bare hands. We then got into a firefight with a few of the security guards and killed several dozen more enemies before finishing our benchmark run. The end result was a frame rate of 53.8 FPS, making Hitman our number eight game overall. Hitman is quite CPU-heavy, so our relatively low frame rate could have been due to our 2700K CPU not being able to muster physics calculations fast enough to keep up with our overclocked GTX 680 GPU.

GTA IV: #7

Game Engine: Rockstar Advanced Game Engine (RAGE)


Yes, we're as upset as anybody about the lack of a PC version of GTA V, but even the fourth game in the series made our mid-range machine struggle. We only saw an average frame rate of 53.21 FPS while driving around crazily through Liberty City, where we would eventually end up picking fights with random pedestrians. It's hard to believe that this game came out back in 2008! GTA IV, however, doesn't look very impressive by today's gaming standards, and we blame the game's demanding hardware requirements on poor PC optimization. Behind the game's performance demands is an amalgamation of three different technologies: Rockstar's RAGE engine, the Euphoria engine, and the Bullet Physics Library. Hitman Absolution, by comparison, looks much better than GTA IV and has almost the same frame rate.

Click the next page for the top five most graphically demanding PC games!

 


 

Far Cry 3: #6

Game Engine: Dunia Engine 2


We started our playthrough in Far Cry 3 running through a tropical forest and then proceeded to steal an abandoned, dirt-splattered car. Once we were done joyriding around the island, we went for a swim in the ocean and visited a neighboring island. We then got into an epic battle with some of the locals, blowing up explosive barrels and stabbing our foes straight through the chest with our machete. Far Cry 3 lands at number six on our list with its 41 FPS. The score is likely due to the large number of particle effects produced when explosive barrels are, well, exploded, and to the tons of vegetation that must be rendered as you walk around the various islands.

The Witcher 2: Assassins of Kings: #5

Game Engine: REDengine with Havok Physics


The Witcher 2 gave us a heavy helping of medieval combat, throwing us into a bloody gladiator arena where we faced hordes of enemies. One enabled setting, the game's Übersampling option, cut our frame rate down to a meager 32.62 FPS. What's Übersampling? It's super-sampling anti-aliasing, meaning that The Witcher 2 made our rig render the game at roughly 4K resolution and then downsize that image to fit our 1920x1080 display. When we turned off Übersampling, the game ran at a buttery-smooth 60+ FPS.
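The math explains the carnage, since supersampling multiplies the pixels your GPU must shade every frame:

    # Pixels shaded per frame: native 1080p vs. 2x2 supersampling.
    native = 1920 * 1080
    ssaa = (1920 * 2) * (1080 * 2)  # rendered internally, then downscaled
    print(ssaa / native)  # 4.0 -- four times the shading work per frame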

Crysis 3: #4

Game Engine: CryEngine 3


Like other Crysis games, Crysis 3 does much the same as its predecessors, giving users a heavy dose of particle effects, high-resolution textures, and tons of crazy physics. We ran around the game's first level, which had us going through a rainstorm while quietly assassinating our foes with a silenced pistol and tactical bow. Exploding barrels of gasoline killed our frame rate in Crysis 3 just like in Far Cry 3, making it dip to an abysmal 13 FPS. In the end, we were only able to get 28.08 FPS out of the title, putting it at number four on our graphically-intensive list.

ARMA 3: #3 (Added: 2-10-14)

Game Engine: Real Virtuality 4


In testing ARMA 3, we first disabled motion blur and cranked up depth of field as high as it would go. We then maxed out shadows, objects, and overall visibility. Finally, we put FSAA (Full Screen Anti-Aliasing) to 8X, and Anisotropic filtering to Ultra.

We started up ARMA 3's first mission, Drawdown 2035, for our test run. Yes, we understand our frame rate would have been lower if we had jumped into a multiplayer match, but we didn't want connection issues to impact our frame rate score. Our playthrough consisted of a helicopter ride to a dusty-brown military base. From there, we grabbed a Humvee and drove down a grass-covered gorge to find one of our fallen comrades. We then got into a firefight with some of the local militia and ended up with an average frame rate of 25.77 FPS, putting ARMA 3 at number three, between Crysis 3 and Tomb Raider.

Tomb Raider (2013): #2

Game Engine: Modified Crystal Engine


Crystal Dynamics brought everyone a Tomb Raider game that rebooted the franchise and gave gamers stunning hair effects with AMD's TressFX setting. We tested Tomb Raider thinking that TressFX would be the reason behind its super-low frame rate, as we were barely able to scrape together 24.8 FPS. We then looked at the game's settings to find that it uses supersampling just like The Witcher 2, which made our GPU work extra hard to render the game at 4K and then downscale it to fit our 1920x1080 display.

Our benchmark run consisted of killing many rabid wolves with our bow and arrow while running through a dark, dense green forest. We re-tested the game with 2xSSAA and found our frame rate increased to 45 FPS. The lesson learned from our testing is that SSAA is very demanding; disable it or scale it back just a bit and, yes, you'll get some jagged edges, but you'll also receive a massive performance boost.

Metro Last Light: #1

Game Engine: 4A Engine


When the first Metro game came out it was a difficult title for PCs to play and made frame rates drop hard and fast. The second title, Metro Last Light, isn't very different, as it takes the top spot for the Most Demanding PC Game on our list. 

The game's extreme PhysX effects and vast amount of tessellation are the culprits behind our low frame rate, which was an unplayable 22.3 FPS. With PhysX turned up, we saw tons of particle and water effects, which made everything sluggish as we ran through the Russian swampland of the game's first level. We maxed out anti-aliasing at 4X on top of that, which piled even more rendering work on top of the PhysX effects, and ultimately this led to the demise of our rig's precious frame rate.

Benchmark Chart:


Here's a bar chart measuring average frame rate for each title.

Conclusion: 
We've seen some interesting results from our tests and came away surprised by how graphically intensive an old game like Crysis still is. Tomb Raider also surprised us with how taxing it was. It's worth noting that PC gaming will likely get a large graphical leap with the recent release of the next-gen consoles. The next few months look promising, with a plethora of graphically demanding titles on the way, including Watch Dogs and Titanfall. It will be interesting to see what titles populate this list in the years to come.

Bill Gates Talks Condoms, Computers, and Other Topics in Reddit AMA

Posted: 10 Feb 2014 12:18 PM PST

Microsoft co-founder makes his second Reddit AMA appearance in a year

It was almost a year ago to the day when Microsoft co-founder Bill Gates jumped into an AMA (Ask Me Anything) session on Reddit. There have been a lot of changes since then, culminating in the promotion of Satya Nadella to Chief Executive Officer, replacing outgoing chief Steve Ballmer. Bill Gates addressed that topic and more in another Reddit AMA today, and also revealed his "most expensive guilty pleasure purchase."

"Owning a plane is a guily pleasure," Gates said. ""Warren Buffett called his the Indefensible. I do get to a lot of places for Foundation work I wouldn't be able to go to without it."

One of the first questions he answered was whether or not he was having any luck with his Foundation's condom design competition. Gates called it a "sensitive topic," adding that one of the grantees is currently using carbon nanotubes to reduce the thickness of condoms.

When asked about his new role at Microsoft, Gates sidestepped the question.

"I am excited about how the cloud and new devices can help us communicate and collaborate in new ways," Gates said. "The OS won't just be on one device and the information won't just be files - it will be your history including being able to review memories of things like kids growing up. I was thrilled Satya asked me to pitch in to make sure Microsoft is ambitious with its innovation. Even in Office there is a lot more than can be done."

However, later on in the session Gates said he plans to divvy up his time by devoting around two-thirds to his foundation, and one-third to Microsoft, the latter of which will mostly be focused on "product work."

For more of what he had to say, head over to Reddit.

Follow Paul on Google+, Twitter, and Facebook

Kaspersky Counts Over 10 Million Malicious Android Applications

Posted: 10 Feb 2014 11:44 AM PST

Android is by far the biggest target of mobile malware

Security firm Kaspersky says it has logged 10 million dubious Android applications to date. It comes down to a numbers game for cyber criminals, and since Android is the most popular mobile operating system on the planet  -- market research firm Canalys estimates that Android accounted for 80 percent of smartphones shipped in 2013 -- it attracts the most attention from malware writers.

"In most cases malicious programs target the user's financial information. This was the case, for example, with the mobile version of Carberp Trojan that originated in Russia," Kaspersky explains. "It steals user credentials as they are sent to a bank server."

Kaspersky also notes that over 98 percent of mobile malware is aimed at Android -- no other OS gets anywhere close, the security firm says. Android's market share plays a big role, but the prevalence of third-party app stores and Android's open architecture also help make it such a popular target.

"We do not expect this trend to change in the near future," Kaspersky says.

SMS Trojans lead the way, followed by backdoor malware. Furthermore, Kaspersky says 62 percent of malicious applications are elements of mobile botnets.

So, what can you do? Kaspersky offers up a handful of tips, such as recommending against activating the "developer mode" on Android devices. The security outfit also warns against installing applications from third-party sources and carefully studying the rights that seemingly legitimate apps request. And of course Kaspersky recommends using protection software.

Follow Paul on Google+, Twitter, and Facebook

Asus VivoTab Note 8 Proves Popular in Microsoft Store, Sells Out at $329

Posted: 10 Feb 2014 10:05 AM PST

Price is right for Asus's VivoTab Note 8

One of the items Asus unveiled at the Consumer Electronics Show (CES) in Las Vegas last month was its upcoming VivoTab Note 8 tablet. Apparently there were a fair number of buyers waiting for this slate -- Microsoft began offering the VivoTab Note 8 online for $329 over the weekend and it now shows as being out of stock. That's pretty impressive, assuming Microsoft didn't start off with just a small quantity.

Asus's VivoTab Note 8 (M80T) features an 8-inch IPS display with a 1280x800 resolution and 5-finger multi-touch support. It also has an Intel Atom Z3740 processor clocked at 1.33GHz (up to 1.86GHz burst), 2GB of LPDDR3-1600 RAM, 32GB of eMMC storage, a microSD card slot, a micro USB 2.0 port, a headphone output/microphone input combo port, a 1.2MP front-facing camera, a 5MP rear-facing camera, 802.11n Wi-Fi (Miracast enabled), Bluetooth, Windows 8.1, and a 1-cell lithium-ion battery good for up to 8 hours of run time.

Adding to the value of its $329 price tag is the fact that it comes pre-loaded with Microsoft Office Home and Student (Word, Excel, PowerPoint, and OneNote). It also comes with a digitizer pen and 8GB recovery micro SD card.

It's not clear when Microsoft will have more stock, but if it's something you're interested in, you can keep an eye on the VivoTab Note 8's product page at the Microsoft Store.

Follow Paul on Google+, Twitter, and Facebook

Microsoft Offers Tips to Tear Your Family and Friends Away from Windows XP

Posted: 10 Feb 2014 08:24 AM PST

Redmond wants to rid the world of Windows XP

Support for Windows XP will end in less than two months, and if you know of family members or friends who are still running the legacy operating system, Microsoft has some tips. In a recent blog post, Microsoft's Brandon LeBlanc suggested ways you can help your loved ones rid themselves of Windows XP before support officially ends on April 8, 2014. One of those ways is to upgrade their PCs to Windows 8.1.

Provided they meet the minimum system requirements -- 1GHz or faster processor, 1GB (32-bit) or 2GB (64-bit) of RAM, 16GB (32-bit) or 20GB (64-bit) of HDD space, and a DX9 or above GPU with WDDM driver -- Microsoft suggests downloading and running the Windows Upgrade Assistant on their PCs and then making the switch.

If that doesn't work out, LeBlanc's next tip is to "get a new PC," plain and simple as that.

"We hope that this end of support page for Windows XP on Windows.com and all the resources there is helpful to you and can be something you can use to help your friends and family get off Windows XP," LeBlanc states. "As we get close to April 8th, we'll continue to publish blog posts about the latest offers on new devices and resources for to help people get off Windows XP."

In case you haven't heard, after April 8, Microsoft will no longer provide updates to Windows XP, nor will technical support be available. Though this has been known for some time, a large number of users continue to cling to the legacy OS -- Net Applications figures some 29 percent of all desktops are running XP, while Stat Counter says it's about 18 percent.

AMD Ends the Speculation, Gives Radeon R7 250X a Proper Launch

Posted: 10 Feb 2014 07:57 AM PST

A sub-$100 graphics card for 1080p gaming

AMD today formally introduced the Radeon R7 250X, an affordable graphics card aimed at gamers looking to play their titles at Full HD 1080p. Some vendors already had the new SKU listed as early as last week, though it should be a lot easier to find starting today and going forward. It's a $99 card, give or take a few dollars depending on what AMD's hardware partners do with the reference design -- the chip designer says custom cooled designs are ready to launch.

The card itself is basically a rebranded Radeon HD 7770 with a new price (the Radeon HD 7770 debuted at $159). It sports a 28nm Cape Verde Graphics Core Next (GCN 1.0) GPU with 640 stream processors, 40 texture units, 16 ROPs, and 1GB or 2GB of GDDR5 running at 4.5GHz (effective) on a 128-bit bus; the engine clock runs at 1,000MHz.

AMD is going after Nvidia's GeForce GTX 650, a slightly higher-priced part that doesn't quite keep up with the R7 250X, or so AMD's own benchmarking shows. For example, AMD offers up the following comparisons:

  • Call of Duty: Ghosts: R7 250X = 42.6fps, GTX 650 = 28.2fps
  • Counter Strike: Global Offensive: R7 250X = 130fps, GTX 650 = 96.5fps
  • DOTA 2: R7 250X = 77.6fps, GTX 650 = 41.7fps
  • Starcraft II: R7 250X = 72.6fps, GTX 650 = 62.6fps
  • World of Tanks: R7 250X = 39fps, GTX 650 = 29fps
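Restated as relative gains (our arithmetic, AMD's numbers), the claimed advantage ranges from about 16 percent in Starcraft II to 86 percent in DOTA 2; here's the quick Python that does the math:

    # AMD's published frame rates, restated as the R7 250X's advantage.
    amd_numbers = {
        "Call of Duty: Ghosts": (42.6, 28.2),
        "Counter Strike: GO":   (130.0, 96.5),
        "DOTA 2":               (77.6, 41.7),
        "Starcraft II":         (72.6, 62.6),
        "World of Tanks":       (39.0, 29.0),
    }
    for game, (r7_250x, gtx_650) in amd_numbers.items():
        print(f"{game}: +{(r7_250x / gtx_650 - 1) * 100:.0f}%")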


Keep in mind those are AMD's benchmarks, not ours. It's also worth pointing out that the R7 250X doesn't support AMD's new TrueAudio technology. However, it does support Mantle, along with DirectX 11.2 and OpenGL 4.3.

Follow Paul on Google+, Twitter, and Facebook

Newegg Daily Deals: Asus M5A97 R2.0 AM3+ AMD 970 Mobo, AMD FX-8320 Vishera, and More!

Posted: 10 Feb 2014 06:24 AM PST


Top Deal:

It's time to put that Prescott system out to pasture. Sure, it was a good run, but in order to get with the times, you have to pull it off life support and give life to a new rig. Don't have a big budget to play with? You could try going with AMD this time. One route is to start with today's top deal for an Asus M5A97 R2.0 AM3+ AMD 970 ATX Motherboard for $80 with free shipping (normally $90 -- use coupon code: [EMCPHPD39]). This board has modern day amenities like a UEFI BIOS, SATA 6Gbps ports, USB 3.0 support, and the list goes on.

Other Deals:

Asus VN279Q Black 27-inch 5ms Monitor w/ Built-in Speakers for $250 with free shipping (normally $270 - use coupon code: [EMCPHPD52]; additional $30 Mail-in rebate)

Asus VS Series VS247H-P Black 23.6-inch 2ms Monitor for $140 with free shipping (normally $160 - use coupon code: [EMCPHPD99]; additional $20 Mail-in rebate)

Fractal Design Define R4 w/ Window Black Pearl Silent ATX Mid Tower Case for $90 with free shipping (normally $100 - use coupon code: [EMCPHPD34])

AMD FX-8320 Vishera 3.5GHz Socket AM3+ 125W Eight-Core Desktop Processor for $150 with free shipping (normally $160 - use coupon code: [EMCPHPD36])

Kingston Introduces New UHS-I U3 SDXC Cards

Posted: 09 Feb 2014 10:46 PM PST

Speedy cards mean speedier transfers

Kingston is introducing a brand new line of super-quick SD cards that will transfer your data in the blink of an eye. The new lineup carries the part number SDA3 and ranges from 16GB to 64GB. Denoted by red stickers, these new SD cards meet the SD Association's latest UHS-I U3 specification and can read at up to 90MB/s and write at up to 80MB/s.

At the U3 specification, these cards guarantee at least 30MB/s for both reading and writing, but to see the full rated speeds you'll need a card reader capable of USB 3.0 speeds. Both the 16GB and 32GB cards are SDHC cards with FAT32 formatting, while the 64GB card is an SDXC card formatted with exFAT.
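What do those ratings mean in practice? Here's the quick math on how long it takes to offload a full 64GB card at the rated read speed versus the U3 minimum:

    # Time to read a full 64GB card at different sustained speeds.
    card_mb = 64 * 1000  # 64GB expressed in MB
    for label, mb_per_sec in (("Rated read, 90MB/s", 90),
                              ("U3 minimum, 30MB/s", 30)):
        minutes = card_mb / mb_per_sec / 60
        print(f"{label}: about {minutes:.0f} minutes")

That works out to roughly 12 minutes at the rated speed versus about 36 minutes at the U3 floor.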

Check out Kingston's official site for additional information.

MMO Updates

The Nexus Telegraph: Making it how you'd like in WildStar

Posted: 10 Feb 2014 09:00 AM PST

I freely admit that I haven't dived heavily into crafting in the WildStar beta, for the same reason I haven't invested heavily in a lot of the beta's other content. That reason is simple: I plan to be playing this game for a long while, and I'd really like to avoid burning out before it even releases. I didn't adhere to that rule in the Final Fantasy XIV beta and kind of felt the pinch, so this is a rule I learned the hard way.

That said, I've fooled around with crafting enough to be really excited after my latest interview regarding the crafting experience. What I heard confirmed my limited experiences and offered some interesting food for thought. There are a couple of elements that might seem counterproductive and a lot more that are worth looking forward to.

Continue reading The Nexus Telegraph: Making it how you'd like in WildStar

World of Speed aims to satisfy racing itch

Posted: 10 Feb 2014 08:00 AM PST

If putting the pedal to the metal is second nature to you, you might be interested in World of Speed, a new racing MMO announced for the PC. Developed by Slightly Mad Studios, this free-to-play game includes a vast array of cars from city runners to historic racing models and a variety of venues from true-to-life tracks to conglomerate concoctions carved from roadways across cities like London and San Francisco.

Besides team and club competitions, World of Speed also sports unique missions and objectives in every race, live events, player challenges, and the Airfield social hub. As an MMO, the game will receive new content regularly in the form of new tracks, cars, and gameplay modes. Interested drivers can sign up for the beta on the official site.

[Source: Slightly Mad Studios press release]

Prime World dishing out $38M of in-game currency to players today

Posted: 10 Feb 2014 07:30 AM PST

Prime World might only be in "beta," but it's getting the star treatment from publisher Nival today in the form of a massive giveaway. Over $38 million in cash shop currency will be distributed among beta testers today at 4:00 p.m. EST.

When the giveaway happens this afternoon, each registered tester will get 3,000 gold -- worth around $100 -- that can be spent in the cash shop, as well as a free exclusive skin. We're giving you a heads-up because you can get in on this too, as long as you register before 4:00 p.m. on the site or through Steam.

Nival is hosting the giveaway to show off the game's progress, including a reduction in matchmaking queue times and the merging of the Russian and English populations of the game.

[Source: Nival press release]

Bless Online's Korean beta hits 100,000 sign-ups in four days

Posted: 10 Feb 2014 07:00 AM PST

If you're interested in checking out Bless Online's first closed beta, you are certainly not alone. Over 100,000 people signed up in just four days -- and that's just those who meet the regional requirements! Of course, the lottery recruitment for beta slots -- open to Korean residents 19 years of age or older -- continues all this week, so that number is expected to rise. It makes you wonder: Just how high would it soar if fans in the West were able to apply?

The Stream Team: Hearts aflutter edition, February 10 - 16, 2014

Posted: 10 Feb 2014 06:00 AM PST

It's that time of year again: the time for in-game festivals and real-world chocolate! For MMO enthusiasts, the middle of February heralds the chance to load up on seasonal goodies in various in-game celebrations. It doesn't matter that the majority of the loot is pink, red, or shaped like a heart -- it's still limited-time loot! And unlike the aforementioned chocolate, overindulgence won't show up on your waistline, so jump on in and nab as much as you can.

While you're at it, make sure you spend some quality time with those special folks in your life who are always there for you -- like The Stream Team! Tune in to Massively TV and watch us doing what we love: showcasing the games you love (or love to hate) all week long.

Continue reading The Stream Team: Hearts aflutter edition, February 10 - 16, 2014

The Daily Grind: Do you enter MMO design contests?

Posted: 10 Feb 2014 05:00 AM PST

The Star Citizen community is in the midst of coming up with some pretty amazing designs for the game's Next Great Starship contest.

Cloud Imperium's space sim isn't the first MMO to offer players a chance to get their creations in game, though. EVE Online has hosted a similar contest in the past, and SOE is of course collecting a ton of community-made assets both in Landmark and via its Player Studio initiative, so it's clear that content crowdsourcing is here to stay in one form or another.

What's less clear is how the community feels about it, and how many members of said community participate in it. Fortunately we have The Daily Grind, in all its completely unofficial and unscientific polling glory. What say you, Massively readers? Do you enter MMO design contests? Have you ever won?

Every morning, the Massively bloggers probe the minds of their readers with deep, thought-provoking questions about that most serious of topics: massively online gaming. We crave your opinions, so grab your caffeinated beverage of choice and chime in on today's Daily Grind!

MMO Week in Review: New shinies

Posted: 09 Feb 2014 05:00 PM PST

At the end of every week, we round up the best and most popular news stories, exclusive features, and insightful columns published on Massively and then present them all in one convenient place. If you missed a big MMO or WoW Insider story last week, you've come to the right post.

February is not known for being a huge MMO news month, but 2014 is all about writing new rules for the genre. This week, The Elder Scrolls Online lifted its press NDA, prompting an influx of impressions and guides for the earliest stages of the game. EverQuest Next Landmark survived its first week in open alpha, and both Guild Wars 2 and Star Wars: The Old Republic posted game updates in an attempt to distract everyone from the new shinies.

All this and more await you in today's roundup of Massively's top MMO content. Read on!

Continue reading MMO Week in Review: New shinies

Pantheon reveals class/race combos

Posted: 09 Feb 2014 04:00 PM PST

If you're pulling for Pantheon, then you might be daydreaming about what class/race combination you'd want to play if the game is made. Visionary Realms hears you (it's inside of your head), and it released a chart showing the classes available for each race.

Some of Pantheon's races are at an obvious disadvantage when it comes to the total number of available classes at the moment, as Humans have six while Dwarves and Ogres have three apiece. However, Ogres do get an exclusive class -- the Shaman -- and can use that to boost self-esteem if needed.

The Kickstarter project also revealed a new stretch goal: the Halfling race. Halflings will be added to the game alongside Gnomes if Pantheon raises $2 million or more.

[Thanks to Josh for the tip!]

EVE Evolved: The top five most dangerous solar systems

Posted: 09 Feb 2014 03:00 PM PST

EVE Online is a PvP game at its core, with conflict built in at a fundamental level. Pirates lurk around key trade routes and stand ready to pounce on unsuspecting victims, while vast nullsec alliances protect their territories with watchful vigilance and never-ending bloodlust. Wander into the wrong solar system as a new player and your precious ship and cargo will be turned into molten slag and a few points on a killboard quicker than you can say, "Hello, new friend, and what does that red square on your ship mean?"

The original map of EVE was generated one evening by an Icelandic developer who could scarcely have known he was deciding the fates of thousands of gamers for years to come. New systems have been added to the game over the years, and a few manual changes have been made to the stargate network, but most of the universe has remained the same for over a decade. In all that time, a few solar systems have stood out as brazen bastions of bastardly behaviour and made their marks on EVE's history.

In this week's EVE Evolved, I run down a list of the top five most dangerous solar systems in EVE's long history and delve into why each has earned its reputation as a no-fly zone for newbies.

Continue reading EVE Evolved: The top five most dangerous solar systems

Gloria Victis needs your Steam votes

Posted: 09 Feb 2014 02:00 PM PST

After its first day on Steam Greenlight, Gloria Victis is already in the top 100 and is hoping for an even better showing. Black Eye Games is heavily promoting the indie medieval MMO as it goes through the Greenlight gauntlet, with the eventual goal of being offered through Steam's digital platform.

Steam users can choose whether or not to vote for Gloria Victis to be greenlit. Currently the game is in pre-alpha testing with regular patches and expects to lift its NDA in the next few weeks.

Gloria Victis features sandbox crafting, plenty of PvP, non-targeted combat, a feudal social system, and a harsh world. Its storyline is being written by The Witcher's Jacek Komuda and Maciej Jurewicz.

Stick and Rudder: OK, so Star Citizen might be a PvP game

Posted: 09 Feb 2014 11:00 AM PST

A few weeks ago I outlined why I think Star Citizen is more a PvE title than a PvP title. I'm sure most of you disagreed, so this week I'd like to examine the other side of the debate.

Sorta.

See, I still think SC is mostly for PvE types, given Chris Roberts' design sensibilities, but I also went back and studied the Roberts PvP quote highlighted in the previous piece as well as the full wall o' text that surrounded it. Roberts, according to that interview, believes that SC will be both a PvE and a PvP game. Fair enough. We often hear devs speaking grandly in the pre-alpha stages of a project and swearing up and down that it's going to make everyone happy.

Can it really, though?

Continue reading Stick and Rudder: OK, so Star Citizen might be a PvP game
