General Gaming Article
- Best Cheap Graphics Card
- Nvidia Tegra K1 Claims Fame as First 64-Bit ARM Chip for Android
- Gartner Predicts 5.2 Million Chromebook Sales in 2014
- Maingear's Nvidia Battlebox Titan Z PC Line Starts at $2,999
- Alienware Alpha PC Game Console is Now Available to Pre-Order in the U.S.
- Lenovo Launches Professional Grade ThinkStation P Series Workstations
- Newegg Daily Deals: Asus Z97-C Motherboard, G.Skill Ares Series 8GB (2x4GB) DDR3-1600, and More!
Best Cheap Graphics Card
Posted: 12 Aug 2014 02:43 PM PDT
Six entry-level cards battle for budget-board bragging rights
The video-card game is a lot like Hollywood. Movies like My Left Foot and The Artist take home the Oscars every year, but movies like Grown Ups 2 and Transformers 3 pull in all the cash. It's the same with GPUs, in that everyone loves to talk about $1,000 cards, but the actual bread and butter of the market is made up of models that cost between $100 and $150. These are not GPUs for 4K gaming, obviously, but they can provide a surprisingly pleasant 1080p gaming experience, and run cool and quiet, too. This arena has been so hot that AMD and Nvidia have recently released no fewer than six cards aimed at budget buyers. Four of these cards are from AMD, and Nvidia launched two models courtesy of its all-new Maxwell architecture, so we decided to pit them against one another in an old-fashioned GPU roundup. All of these cards use either a single six-pin PCIe connector or none at all, so you don't even need a burly power supply to run them, just a little bit of scratch and the desire to get your game on. Let's dive in and see who rules the roost!
Nvidia's Maxwell changes the game
Budget GPUs have always been low-power components, and usually need just a single six-pin PCIe power connector, if that. After all, a budget GPU goes into a budget build, and those PCs typically don't come with the 600W-or-higher power supplies that provide dual six- or eight-pin PCIe connectors. Since many budget PSUs don't have PCIe connectors, most of these cards come with Molex adapters in case you don't have one. The typical thermal design power (TDP) of these cards is around 110 watts or so, but that number fluctuates up and down according to spec. For comparison, the Radeon R9 290X has a TDP of roughly 300 watts, and Nvidia's flagship card, the GTX 780 Ti, has a TDP of 250W, so these budget cards don't have a lot of juice to work with. Therefore, efficiency is key, as the GPUs need to make the most of the teeny, tiny bit of wattage they are allotted. In 2012, AMD and Nvidia released GPUs based on all-new 28nm architectures named GCN and Kepler, respectively, and though Nvidia held a decisive advantage in the efficiency battle, it has taken things to the next level with its new ultra-low-power Maxwell GPUs, released in February 2014. Beginning with the GTX 750 Ti and the GTX 750, Nvidia is embarking on a whole new course for its GPUs, centered around maximum power efficiency. The goal with the previous Kepler architecture was better performance per watt than its predecessor, Fermi, and Nvidia succeeded; Maxwell takes that same philosophy even further, with a goal of being twice as efficient as Kepler while providing 25 percent more performance. Maxwell offers far greater power savings by using more granular clock gating, which allows it to shut down unused graphics units. Achieving more performance for the same model or SKU from one generation to the next is a tough enough challenge, but to do so while cutting power consumption in half is an even trickier gambit, especially considering the Maxwell GPUs are fabricated on the same 28nm process Nvidia used for Kepler. We always expect more performance for less power when moving from one process node to the next, such as 32nm to 28nm or 22nm to 14nm, but to do so on the same process is an amazing achievement indeed.
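Taken together, those two targets pin down roughly how much power a Maxwell part should draw relative to its Kepler equivalent. Here's a back-of-the-envelope sketch; note that pairing the GTX 750 Ti against the GTX 650 Ti Boost from the spec chart below is our own choice of yardstick, not Nvidia's.

```python
# What "2x the efficiency of Kepler, with 25 percent more performance" implies for power.
perf_ratio = 1.25   # Maxwell performance target relative to Kepler
eff_ratio = 2.0     # Maxwell performance-per-watt target relative to Kepler

power_ratio = perf_ratio / eff_ratio
print(f"Implied power draw: {power_ratio:.1%} of the comparable Kepler part")  # 62.5%

# Sanity check against the TDPs in the spec chart: GTX 750 Ti (60W) vs. the Kepler-based
# GTX 650 Ti Boost (134W), a pairing we chose purely for illustration.
print(f"GTX 750 Ti vs. GTX 650 Ti Boost TDP: {60 / 134:.1%}")  # roughly 44.8%
```

By either measure, the power budget roughly halves while performance holds steady or improves, which is the whole point of the architecture.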
Though Nvidia used many technological advances to reduce power consumption, the main structural change was to how the individual CUDA cores inside the Graphics Processing Clusters (GPCs) are organized and controlled. In Kepler, each GPC contained individual processing units, named SMX units, and each unit featured a piece of control logic that handled scheduling for 192 CUDA cores, a major increase from the 32 cores per block found in Fermi. In Maxwell, Nvidia has gone back to 32 CUDA cores per block, but puts four blocks into each unit, now called an SM unit. If you're confused, the simple version is this: rather than one piece of logic controlling 192 cores, Maxwell has a piece of logic for each cluster of 32 cores, and there are four clusters per unit, for a total of 128 cores per unit. That drops the core count per unit by 64, from 192 to 128, which helps save energy. Also, since each piece of control logic only has to pay attention to 32 cores instead of 192, it can run them more efficiently, which also saves energy. The benefit of all this energy-saving is that the GTX 750 cards don't need external power, so they can be dropped into pretty much any PC on the market without upgrading the power supply. That makes them a great upgrade for any pre-built POS you have lying around the house.
Gigabyte GTX 750 Ti WindForce
Nvidia's new Maxwell cards run surprisingly cool and quiet in stock trim, and that's with a fan no larger than an oversized Ritz cracker, so you can guess what happens when you throw a mid-sized WindForce cooler onto one of them. Yep, it's so quiet and cool you have to check with your fingers to see if it's even running. This bad boy ran at 45 C under load, making it the coolest-running card we've ever tested, so kudos to Nvidia and Gigabyte on holding it down (the temps, that is). This board comes off the factory line with a very mild overclock of just 13MHz (why even bother, seriously), and its boost clock has been massaged up to 1,111MHz from 1,085MHz, but as always, this is just a starting point for your overclocking adventures. The memory is kept at reference speeds, however, running at 5,400MHz. The board sports 2GB of GDDR5 memory and uses a custom design for its blue-colored PCB. It features two 80mm fans and an 8mm copper heat pipe. Most interesting, the board requires a six-pin PCIe connector, unlike the reference design, which does not. The WindForce cooler is overkill, but we like it that way. In testing, the GTX 750 Ti WindForce was neck-and-neck with the Nvidia reference design, proving that Nvidia did a pretty good job with this card, and that its cooling requirements don't really warrant such an outlandish cooler. Still, we'll take it, and we loved that it was totally silent at all times. Overclocking potential is higher, of course, but since the reference design overclocked to 1,270MHz or so, we don't think you should expect moon-shot overclocking records. Still, this card was rock solid, whisper quiet, and extremely cool.
Gigabyte GTX 750 Ti WindForce, $160 (street), www.gigabyte.us
MSI GeForce GTX 750 Gaming
Much like Gigabyte's GTX 750 Ti WindForce card, the MSI GTX 750 Gaming is a low-power board with a massive Twin Frozr cooler attached to it for truly exceptional cooling performance. The only downside is that the formerly waifish GPU has been transformed into a full-size card, measuring more than nine inches long.
Unlike the Gigabyte card, though, this GPU eschews the six-pin PCIe connector; it's just a 55W board, and since the PCIe slot delivers up to 75W, it doesn't even need the juice. Despite this card's entry-level billing, MSI has fitted it with "military-class" components for better overclocking and improved stability. It uses twin heat pipes and dual 100mm fans to keep cool, as well. It also includes a switch that lets you fall back to an older BIOS in case you run into overclocking issues. MSI's Twin Frozr cooling apparatus transforms this svelte GPU into a full-sized card. Speaking of which, this board lives up to its Gaming name and has a beefy overclock right out of the box, running at a 1,085MHz base clock and a 1,163MHz boost clock. It features 1GB of GDDR5 RAM on a 128-bit interface. The Twin Frozr cooler handles the minuscule amount of heat coming off this board with aplomb; we were able to press a finger forcibly on the heatsink under load and feel almost no warmth, sort of like when we give Gordon a hug when he arrives at the office. As the only GTX 750 in this test, it showed it could run our entire test suite at decent frame rates, but it traded barbs with the slightly less expensive Radeon R7 260X. On paper, both the GTX 750 and the R7 260X are about $119, but rising prices from either increased demand or low supply have placed both cards in the $150 range, making it a dead heat. Still, it's a very good option for those who want an Nvidia GPU and its ecosystem but can't afford the Ti model.
MSI GeForce GTX 750 Gaming, $140, www.msi.com
Sapphire Radeon R7 265 Dual-X
The Sapphire Radeon R7 265 is the odds-on favorite in this roundup, due to its impressive specs and the fact that it consumes more than twice the power of the Nvidia cards. Sure, it's an unfair advantage, but hate the game, not the player. This board is essentially a rebadged Radeon HD 7850, which is a Pitcairn part, and it slides right in between the $120 R7 260X and the $180ish R7 270. This card actually has the same clock speeds as the R7 270, but features fewer streaming processors for reduced shader performance. It has the same 2GB of memory, same 925MHz boost clock, same 256-bit memory bus, and so on. At 150W, its TDP is very high, or at least it seems high given that the GTX 750 Ti costs the exact same $150 and sits at just 60W. Unlike the lower-priced R7 260X Bonaire part, though, the R7 265 is older silicon and thus does not support TrueAudio or XDMA CrossFire (bridgeless CrossFire, basically). However, it will support the Mantle API, someday. Sapphire's R7 265 is the third card in this roundup to use a two-fan cooling apparatus. The Sapphire card delivered the goods in testing, boasting top scores in many benchmarks and coming in as the only GPU in this roundup to hit the magical 60fps in any test, thanks to a blistering turn in Call of Duty: Ghosts, where it hit 67fps at 1080p on Ultra settings. That's damned impressive, as was its ability to run at 49fps in Battlefield 4, though the GTX 750 Ti was just a few frames behind it. Overall, though, this card cleaned up, taking first place in seven of nine benchmarks. If that isn't a Kick Ass performance, we don't know what is. The Dual-X cooler also kept temps and noise in check, making this the go-to GPU for those with small boxes or small monitors.
Sapphire Radeon R7 265 Dual-X, $150 (MSRP), www.sapphiretech.com
AMD Radeon R7 260X
The Radeon R7 260X was originally AMD's go-to card for 1080p gaming on a budget.
It's the only card in the company's sub-$200 lineup that supports all the next-gen features that appeared in its Hawaii-based flagship boards, including TrueAudio, XDMA CrossFire, Mantle (as in, it worked at launch), and the ability to drive up to three displays, all from this tiny $120 GPU. Not bad. In its previous life, this GPU was known as the Radeon HD 7790, aka Bonaire, and it was our favorite "budget" GPU when pitted against the Nvidia GTX 650 Ti Boost, thanks to its decent performance and amazing at-the-time game bundles. It features a 128-bit memory bus, 896 stream processors, 2GB of RAM (up from 1GB on the previous card), and a healthy boost clock of 1,100MHz. TDP is just 115W, so it slots right in between the Nvidia cards and the higher-end R7 265 board. Essentially, this is an HD 7790 card with 1GB more RAM and support for TrueAudio, which we have yet to experience. This $120 card supports Mantle, TrueAudio, and CrossFire. In testing, the R7 260X delivered passable performance, staking out the middle ground between the faster R7 265 and the much slower R7 250 cards. It ran at about 30fps in tests like Crysis 3 and Tomb Raider, but hit 51fps in CoD: Ghosts and 40fps in Battlefield 4, so it's certainly got enough horsepower to run the latest games at high settings. The fact that it supports all the latest technology from AMD is what bolsters this card's credentials, though, and the fact that it can run Mantle with no problems is a big plus for Battlefield 4 players. We like this card a lot, just as we enjoyed the HD 7790. While it's not the fastest card in the bunch, it's certainly far from the slowest.
AMD Radeon R7 260X, $120, www.amd.com
MSI GeForce GTX 750 Gaming | Gigabyte GeForce GTX 750 Ti | GeForce GTX 650 Ti Boost * | GeForce GTX 660 * | MSI Radeon R7 250 | PowerColor Radeon R7 250X | AMD Radeon R7 260X | Sapphire Radeon R7 265
Price | $120 | $150 | $160 | $210 | $90 | $100 | $120 | $150 |
Code-name | Maxwell | Maxwell | Kepler | Kepler | Oland | Cape Verde | Bonaire | Curacao
Processing cores | 512 | 640 | 768 | 960 | 384 | 640 | 896 | 1,024 |
ROP units | 16 | 16 | 24 | 24 | 8 | 16 | 16 | 32 |
Texture units | 32 | 40 | 64 | 80 | 24 | 40 | 56 | 64 |
Memory | 2GB | 2GB | 2GB | 2GB | 1GB | 1GB | 2GB | 2GB |
Memory speed | 1,350MHz | 1,350MHz | 1,500MHz | 1,500MHz | 1,500MHz | 1,125MHz | 1,500MHz | 1,400MHz |
Memory bus | 128-bit | 128-bit | 192-bit | 192-bit | 128-bit | 128-bit | 128-bit | 256-bit |
Base clock | 1,020MHz | 1,020MHz | 980MHz | 980MHz | N/A | N/A | N/A | N/A |
Boost clock | 1,085MHz | 1,085MHz | 1,033MHz | 1,033MHz | 1,050MHz | 1,000MHz | 1,100MHz | 925MHz
PCI Express version | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
Transistor count | 1.87 billion | 1.87 billion | 2.54 billion | 2.54 billion | 1.04 billion | 1.04 billion | 2.08 billion | 2.8 billion |
Power connectors | N/A | N/A | 1x six-pin | 1x six-pin | N/A | 1x six-pin | 1x six-pin | 1x six-pin |
TDP | 54W | 60W | 134W | 140W | 65W | 80W | 115W | 150W |
Fab process | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
Multi-card support | No | No | Yes | Yes | No | Yes | Yes | Yes |
Outputs | DVI, VGA, HDMI | 2x DVI, 2x HDMI | 2x DVI, HDMI, DisplayPort | 2x DVI, HDMI, DisplayPort | DVI-S, VGA, HDMI | DVI, VGA, HDMI | 2x DVI, HDMI, DisplayPort | 2x DVI, HDMI, DisplayPort |
* Provided for reference purposes.
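One thing the chart doesn't show directly is memory bandwidth. The listed memory speeds are physical GDDR5 clocks, and GDDR5 transfers data four times per clock, which is why the story quotes the GTX 750 Ti's memory at 5,400MHz effective. Turning the memory speed and bus width figures into bandwidth is simple math; here's a quick sketch using values transcribed from the chart above.

```python
# Bandwidth (GB/s) = effective data rate (MT/s) x bus width (bits) / 8 / 1000.
# GDDR5 effective data rate = 4 x the physical clock listed in the chart.

cards = {
    # card: (memory clock in MHz, memory bus in bits), per the spec chart
    "GeForce GTX 750 / 750 Ti": (1350, 128),
    "Radeon R7 260X":           (1500, 128),
    "Radeon R7 265":            (1400, 256),
}

for card, (clock_mhz, bus_bits) in cards.items():
    effective_mts = clock_mhz * 4
    bandwidth_gbs = effective_mts * bus_bits / 8 / 1000
    print(f"{card}: {effective_mts} MT/s on a {bus_bits}-bit bus = {bandwidth_gbs:.1f} GB/s")
```

That 256-bit bus is the R7 265's ace in the hole: it has roughly double the memory bandwidth of the other cards here, which helps explain its benchmark results later in this story.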
How we tested
We lowered our requirements, but not too much
We normally test all of our video cards on our standardized test bed, which has now been in operation for a year and a half, with only a few changes along the way. In fact, the only major change we've made to it in the last year was swapping the X79 motherboard and case. The motherboard had endured several hundred video-card insertions, which is well beyond the design specs. The case had also become bent to the point where the video cards were drooping slightly. Some, shall we say, "overzealous" overclocking also caused the motherboard to begin behaving unpredictably. Regardless, it's a top-tier rig with an Intel Core i7-3960X Extreme processor, 16GB of DDR3 memory, an Asus Rampage IV Extreme motherboard, Crucial M500 SSD, and Windows 8 64-bit Enterprise.
For the AMD video cards, we loaded Catalyst driver 14.1 Beta 1.6, as that was the latest driver, and for the Nvidia cards, we used the 334.89 WHQL driver that was released just before testing began. We originally planned to run the cards at our normal "midrange GPU" settings, which is 1920x1080 resolution with maximum settings and 4X AA enabled, but after testing began, we realized we needed to back off those settings just a tad. Instead of dialing it down to medium settings, though, as that would run counter to everything we stand for as a magazine, we left the settings on "high" across the board, but disabled AA. These settings were a bit much for the lower-end cards, but rather than lower our settings once again, we decided to stand fast at 1080p with high settings, since we figured that's where you want to be gaming and you deserve to know if some of the less-expensive cards can handle that type of action.
Mantle Reviewed
A word about AMD's Mantle API
AMD's Mantle API is a competitor to DirectX, optimized specifically for AMD's GCN hardware. In theory, it should allow for better performance since its code knows exactly what hardware it's talking to, as opposed to DirectX's "works with any card" approach. The Mantle API should be able to give all GCN 1.0 and later AMD cards quite a boost in games that support it. However, AMD points out that Mantle will only show benefits in scenarios that are CPU-bound, not GPU-bound, so if your GPU is already working as hard as it can, Mantle isn't going to help it. However, if your GPU is always waiting around for instructions from an overloaded CPU, then Mantle can offer respectable gains.
To test it out, we ran Battlefield 4 on an older Ivy Bridge quad-core, non-Hyper-Threaded Core i5-3470 test bench with the R7 260X GPU at 1920x1080 and 4X AA enabled. As of press time, there are only two games that support Mantle—Battlefield 4 and an RTS demo on Steam named Star Swarm. In Battlefield 4, we were able to achieve 36fps using DirectX, and 44fps using Mantle, which is a healthy increase and a very respectable showing for a $120 video card. The benefit was much smaller in Star Swarm, however, showing a negligible increase of just two frames per second.
We then moved to a much beefier test bench running a six-core, Hyper-Threaded Core i7-3960X and a Radeon R9 290X, and we saw an increase in Star Swarm of 100 percent, going from 25fps with DirectX to 51fps using Mantle in a timed demo. We got a decent bump in Battlefield 4, too, going from 84fps using DirectX to 98fps using Mantle.
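Expressed as relative gains, the numbers above break down like this (a quick sketch using only the frame rates we just quoted):

```python
# Percentage gains from switching DirectX to Mantle, using the results reported above.
results = [
    ("Battlefield 4, Core i5-3470 + R7 260X",  36, 44),
    ("Battlefield 4, Core i7-3960X + R9 290X", 84, 98),
    ("Star Swarm,    Core i7-3960X + R9 290X", 25, 51),
]

for config, dx_fps, mantle_fps in results:
    gain_pct = (mantle_fps / dx_fps - 1) * 100
    print(f"{config}: {dx_fps}fps -> {mantle_fps}fps (+{gain_pct:.0f}%)")
```

Star Swarm is the outlier, which fits AMD's point about CPU-bound scenarios: it's a demo built specifically to hammer the CPU with draw calls.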
Overall, Mantle is legit, but it's kind of like PhysX or TressFX in that it's nice to have when it's supported, and does provide a boost, but it isn't something we'd count on being available in most games.
Final Thoughts
If cost is an issue, you've got options
Testing the cards for this feature was an enlightening experience. We don't usually dabble in GPU waters that are this shallow, so we really had no idea what to expect from all the cards assembled. To be honest, if we were given a shot of sodium pentothal, we'd have to admit that, given these cards' price points, we had low expectations, but we thought they'd all at least be able to handle 1920x1080 gaming. As spoiled gamers used to running 2K or 4K resolution, 1080p seems like child's play to us. But we found out that running that resolution at maximum settings is a bridge too far for any GPU that costs less than $120 or so. The $150 models are the sweet spot, though, and are able to game extremely well at 1080p, meaning the barrier to entry for "sweet gaming" has been lowered by $100, thanks to these new GPUs from AMD and Nvidia.
Therefore, the summary of our results is that if you have $150 to spend on a GPU, you should buy the Sapphire Radeon R7 265, as it's the best card for gaming at this price point, end of discussion. OK, thanks for reading.
Oh, are you still here? OK, here's some more detail. In our testing, the Sapphire R7 265 was hands-down the fastest GPU at its price point—by a non-trivial margin in many tests—and is superior to the GTX 750 Ti from Nvidia. It was also the only GPU to come close to the magical 60fps we desire in every game, making it pretty much the only card in this crowd that came close to satisfying our sky-high demands. The Nvidia GTX 750 Ti card was a close second, though, and provides a totally passable experience at 1080p with all settings maxed. Nvidia's trump card is that it consumes less than half the power of the R7 265 and runs 10 C cooler, but we doubt most gamers will care except in severely PSU-constrained systems.
Moving down one notch to the $120 cards, the GTX 750 and R7 260X trade blows quite well, so there's no clear winner. Pick your ecosystem and get your game on, because these cards are totally decent, and delivered playable frame rates in every test we ran.
The cards on the bottom rung, the R7 250 and R7 250X, weren't playable at 1080p at max settings, so avoid them. They are probably good for 1680x1050 gaming at medium settings or something in that ballpark, but in our world, that is a no-man's land filled with shattered dreams and sad pixels.
Nvidia GTX 750 Ti (reference) | Gigabyte GTX 750 Ti | MSI GTX 750 Gaming | Sapphire Radeon R7 265 | AMD Radeon R7 260X | PowerColor Radeon R7 250X | MSI Radeon R7 250 OC
Driver | 334.89 | 334.89 | 334.89 | 14.1 v1.6 | 14.1 v1.6 | 14.1 v1.6 | 14.1 v1.6 |
3DMark Fire Strike | 3,960 | 3,974 | 3,558 | 4,686 | 3,832 | 2,806 | 1,524
Unigine Heaven 4.0 (fps) | 30 | 30 | 25 | 29 | 23 | 17 | 9 |
Crysis 3 (fps) | 27 | 25 | 21 | 32 | 26 | 16 | 10 |
Far Cry 3 (fps) | 40 | 40 | 34 | 40 | 34 | 16 | 14 |
Tomb Raider (fps) | 30 | 30 | 26 | 36 | 31 | 20 | 12 |
CoD: Ghosts (fps) | 51 | 49 | 42 | 67 | 51 | 28 | 22 |
Battlefield 4 (fps) | 45 | 45 | 32 | 49 | 40 | 27 | 14 |
Batman: Arkham Origins (fps) | 74 | 71 | 61 | 55 | 43 | 34 | 18 |
Assassin's Creed: Black Flag (fps) | 33 | 33 | 29 | 39 | 21 | 21 | 14 |
Best scores are bolded. Our test bed is a 3.33GHz Core i7-3960X Extreme Edition in an Asus Rampage IV Extreme motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 8. All games are run at 1920x1080 with no AA except for the 3DMark tests.
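If you want to collapse that chart into a single value figure, average frame rate per dollar is a reasonable rough cut. The sketch below uses the seven game tests above and the street/MSRP prices quoted in the individual reviews; it's a blunt instrument that ignores Mantle, TrueAudio, power draw, and noise, but it backs up the sweet-spot argument.

```python
# Rough price-to-performance from the benchmark chart above.
# fps lists are the seven game tests (Crysis 3, Far Cry 3, Tomb Raider, CoD: Ghosts,
# Battlefield 4, Batman: Arkham Origins, AC: Black Flag); prices as quoted in the story.
cards = {
    "Gigabyte GTX 750 Ti": (160, [25, 40, 30, 49, 45, 71, 33]),
    "MSI GTX 750 Gaming":  (140, [21, 34, 26, 42, 32, 61, 29]),
    "Sapphire R7 265":     (150, [32, 40, 36, 67, 49, 55, 39]),
    "AMD R7 260X":         (120, [26, 34, 31, 51, 40, 43, 21]),
    "PowerColor R7 250X":  (100, [16, 16, 20, 28, 27, 34, 21]),
    "MSI R7 250 OC":       ( 90, [10, 14, 12, 22, 14, 18, 14]),
}

for card, (price, fps) in cards.items():
    avg_fps = sum(fps) / len(fps)
    print(f"{card}: {avg_fps:.1f} average fps, {avg_fps / price * 100:.1f} fps per $100")
```

By this admittedly crude metric, the R7 265 and R7 260X lead, with the GTX 750 Ti close behind, which mirrors the conclusions above.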
Nvidia Tegra K1 Claims Fame as First 64-Bit ARM Chip for Android
Posted: 12 Aug 2014 10:32 AM PDT
Android enters the 64-bit ARM era
Say hello to "Denver," the codename for Nvidia's 64-bit Tegra K1 System-on-Chip (SoC), which also happens to be the first 64-bit ARM processor for Android. The new version of Nvidia's Tegra K1 SoC pairs the company's Kepler architecture-based GPU with its own custom-designed, 64-bit, dual-core "Project Denver" CPU, which Nvidia says is fully ARMv8 architecture compatible.
So, what's special about this chip besides a 64-bit instruction set? Nvidia designed Denver to offer the highest single-core CPU throughput and industry-leading dual-core performance. Each Denver core (and there are two) sports a 7-way superscalar microarchitecture and includes a 128KB 4-way L1 instruction cache, a 64KB 4-way L1 data cache, and a 2MB 16-way L2 cache that services both cores.
Using a process called Dynamic Code Optimization, Denver optimizes frequently used software routines at runtime into dense, highly tuned microcode-equivalent routines stored in a dedicated 128MB main-memory based optimization cache. This allows for faster access and execution, which translates into faster performance, in part because it lessens the need to re-optimize the software routine.
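If the idea is hard to picture, a loose software analogy is a memoizing translation cache: watch which routines run most often, translate them once, and serve the translated version from then on. The sketch below is purely illustrative; the threshold, the optimize() helper, and the dictionary cache are invented for the example and say nothing about how Denver's hardware actually does it.

```python
# Loose software analogy for Dynamic Code Optimization (illustrative only; the real
# mechanism lives in hardware/firmware and the names here are invented for the sketch).

HOT_THRESHOLD = 3        # runs before a routine counts as "frequently used"
run_counts = {}
optimization_cache = {}  # stand-in for the 128MB main-memory optimization cache

def optimize(routine):
    # Pretend this emits a dense, tuned, microcode-like equivalent of the routine.
    return f"optimized[{routine}]"

def execute(routine):
    if routine in optimization_cache:
        return optimization_cache[routine]   # hot path: reuse the earlier translation
    run_counts[routine] = run_counts.get(routine, 0) + 1
    if run_counts[routine] >= HOT_THRESHOLD:
        optimization_cache[routine] = optimize(routine)
    return routine                            # cold path: run the original form

for _ in range(5):
    print(execute("video_decode_loop"))
```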
Denver will also benefit Android platforms with new low latency power-state transitions. This is in addition to extensive power-gating and dynamic voltage and clock scaling routines based on workloads. The end result is more efficient power usage, which allows Denver's performance to rival even some mainstream PC-class CPUs at significantly reduced power consumption, Nvidia says.
If you want to dig even further into the architecture, you can get more details here.
Gartner Predicts 5.2 Million Chromebook Sales in 2014
Posted: 12 Aug 2014 10:00 AM PDT
Chromebooks are big in the education sector
We've pointed out before how Chromebooks are some of the best-selling laptops on Amazon, and though these cloud-based systems aren't as capable as their Windows-based counterparts, they're having no trouble finding an audience, particularly in education circles. In fact, market research firm Gartner forecasts 5.2 million Chromebook sales by the end of the year, which would translate into a 79 percent jump compared to 2013.
That's just the tip of the cloud -- by 2017, Gartner sees Chromebook sales nearly tripling to 14.4 million units. Not too shabby for what some power users call a glorified web browsing machine, though to be fair, Chromebooks are capable of much more than just surfing the Internet.
"Competition in the Chromebook market is intensifying as more vendors launch Chromebooks, with eight models in the market in 2014," said Isabelle Durand, principal analyst at Gartner. "Now that the PC market is no longer growing strongly, vendors are searching for new business opportunities. They launched Chromebooks to revive interest in sub-$300 portable PCs once the netbook bubble had burst."
Unlike netbooks, however, Chromebooks are most predominant in the education sector, which accounted for nearly 85 percent of Chromebook sales last year. And of the 2.9 million Chromebooks sold during 2013, 82 percent were sold in North America, Gartner says.
Despite just 15 percent of Chromebooks sold in 2013 landing outside the education sector, Gartner insists they will carve out a place in businesses for specific workers, such as staff in banking, hotel reception, and so forth.
"So far, businesses have looked at Chromebooks, but not bought many," Durand added. "By adopting Chromebooks and cloud computing, businesses can benefit; they can shift their focus from managing devices to managing something much more important — their data."
Maingear's Nvidia Battlebox Titan Z PC Line Starts at $2,999
Posted: 12 Aug 2014 09:04 AM PDT
Undercutting the competition
Nvidia has partnered with various system builders to equip their current rigs with at least one GeForce Titan Z graphics card and rebrand them as Battleboxes, Maingear being one of them. Unlike ones we've already seen, however, Maingear's Battlebox Titan Z PCs are a little more cost friendly, with the least expensive model (Vybe) starting at $2,999. Battlebox configurations are also available on Maingear's F131 and Shift starting at $3,199 and $3,499, respectively.
"The new Maingear Battlebox Titan Z PCs offer a great value with optimal performance with the Titan Z card, if someone has been waiting to get PC their Titan Z pc, any of these PC are great options," said Wallace Santos, CEO and founder of Maingear.
As the Titan Z is capable of gaming at a 4K resolution, you can add an Asus 28-inch 4K monitor to each configuration. Doing so would bring the starting price up to $3,698 for the Vybe, $3,898 for the F131, and $4,198 for the Shift. With or without the monitor, each system comes with an extended two year warranty.
Bear in mind that these are starting prices. Plenty of upgrades are available for each model. You can check them all out here.
Alienware Alpha PC Game Console is Now Available to Pre-Order in the U.S.
Posted: 12 Aug 2014 07:49 AM PDT
Configurations start at $549
In an alternate reality, Dell is launching its Alienware Alpha PC game console as its official Steam Machine. In this world, however, Valve threw a wrench into every OEM's plans by delaying the launch of its Steam Machine platform until next year, as it wanted more time to tweak its Steam Controller. Fair enough, though not all system builders are willing to put their PC console plans on hold. Enter Alienware, which today announced it's taking pre-orders for its Alpha console.
"Gamers can now secure their gateway to the entire Steam library on a system that was engineered to provide an immersive PC gaming experience, custom-tailored for the living room," Alienware said. "The Alienware Alpha merges the open ecosystem and flexibility of PC gaming with the ease-of-use and intuitive interface of consoles. This enables gamers to choose what and how they want to play, whether it's a competitive online FPS with the bundled Microsoft Xbox 360 wireless controller for Windows, or having their friends bring their controllers of choice for a fragfest in the newest indie side scroller."
Systems start at $549, though you can configure a higher priced machine with more potent hardware. That includes the availability of Intel Core i7 processor options, 8GB of RAM, 1TB of storage, 802.11ac Wi-Fi, and Nvidia Maxwell-based GPU options. All systems come with a Microsoft Xbox 360 controller to boot.
As an added bonus, Alienware Alpha machines will also come bundled with Payday 2, Magicka, Magicka: Dungeons and Daemons DLC, and Gauntlet Helm.
Alienware Alpha systems will ship in November, just as previously promised.
Lenovo Launches Professional Grade ThinkStation P Series Workstations
Posted: 12 Aug 2014 06:16 AM PDT
Rebuilt from the ground up
Lenovo is using its time at SIGGRAPH 2014 in Vancouver, British Columbia to launch its new ThinkStation P Series of desktop workstations. The new Lenovo P900, P700, and P500 workstations join the entry-level ThinkStation P300 announced in May of this year and have been completely retooled compared to Lenovo's previous generation of workstations. According to Victor Rios, Vice President and General Manager of Lenovo's Workstation division, these are the "best designed" workstations the company has ever built.
"We built on the advancements delivered in our 30 Series workstations and engineered the P Series with new levels of innovation based on extensive customer feedback," Rios said in a statement. "The result for users is optimum performance, outstanding reliability, and unparalleled usability that is unlike anything they have experienced."
The P-Series boasts tool-less upgrades, intuitive red touch points that guide users' hands to quick and easy component changes, integrated handles, QR codes, and a diagnostic USB port that allows you to plug in an Android-based tablet or smartphone for system analysis.
On the ultra high-end, the ThinkStation P900 comes with a 1300W power supply, Intel Xeon processor options with support for dual CPUs, up to 512GB of DDR4 memory spread across 16 DIMM slots, up to 14 storage devices (8 internal + 2 external + 4 M.2), up to four GPUs, support for 12Gbps SAS and up to 32Gbps PCIe, 10 expansion slots, FireWire, four USB 3.0 ports, a serial port, up to a 29-in-1 card reader, RAID support, and dual GbE ports.
The P700 is a bit more subdued than the P900, with up to an 850W PSU, up to 384GB of DDR4 memory (12 DIMM slots), up to 12 storage devices, up to three GPUs, and seven expansion slots.
Slightly lower is the P500, which is similar to the P700 but with 8 DIMM slots supporting up to 256GB of DDR4 memory, single Xeon processor support, room for 11 storage devices, support for two GPUs, and a single GbE port.
Lenovo says its new P Series systems will be available later this fall. No word yet on price.
Newegg Daily Deals: Asus Z97-C Motherboard, G.Skill Ares Series 8GB (2x4GB) DDR3-1600, and More!
Posted: 12 Aug 2014 06:11 AM PDT
Top Deal:
Talk is cheap, and if you've been talking about building a new system for several months now, you'll have to bite the bullet and part with some green, though perhaps not as much as you think. To build a respectable system without spending a fortune, keep checking Newegg's Daily Deals, including today's for an Asus Z97-C LGA 1150 Motherboard for $120 with free shipping (normally $140 - use coupon code: [EMCPBPC38]). It has four DIMM slots with support for up to 32GB of DDR3-3200 (OC, of course), four SATA 6Gbps ports, a single M.2 port, USB 3.0 connectivity, and plenty more features.
Other Deals:
G.Skill Ares Series 8GB (2 x 4GB) 240-Pin DDR3 1600 (PC3 12800) Desktop Memory for $77 with free shipping (normally $85 - use coupon code: [EMCPBPC42])
LG 27-inch 5ms HDMI Widescreen LED Backlight LCD Monitor IPS 200 for $200 with free shipping
OCZ Technology 120GB Vertex 460 2.5-inch SSD for $80 with free shipping
Intel Core i5-3570K Ivy Bridge Quad-Core 3.4GHz (3.8GHz Turbo) LGA 1155 77W Desktop Processor for $230 with free shipping