The world's smallest-process consumer CPU comes out in a big way
You are, no doubt, quite familiar with Intel's CPU-release "cadence" of tick-tock by now. If not, the short story is that every tock brings a major breakthrough, while ticks are decent upgrades but nothing to Twitter home about.
That's not necessarily the case with Intel's latest tick, the Ivy Bridge CPU. Sure, the performance enhancements on the x86 side of the aisle won't exactly knock you on your tuchus, but they're still decent. The upgrades to the graphics core, however, make Ivy Bridge more noteworthy.
As we know, Intel found religion through graphics and has been progressively improving on it ever since. The Clarkdale CPUs moved graphics directly into the CPU package, and Sandy Bridge CPUs moved graphics directly onto the CPU die itself. With Ivy Bridge, Intel says it outdid itself by doubling the graphics performance of Sandy Bridge.
If you're ready to write off Ivy Bridge as an incremental chip that you, the enthusiast, don't give a damn about, you're wrong. There's a lot more to Ivy Bridge that makes it the default CPU for an enthusiast who doesn't want to jump into the bigger, pricier LGA2011 socket. Don't believe us? Read on to find out why you want this CPU instead of Sandy Bridge.
Meet the Ivy Bridge Lineup
World's First Chip with '3D' Transistors
Despite its revolutionary tri-gate design, Ivy Bridge doesn't do much to advance x86
We've long dubbed Intel the "Master of the Fab." The company's prowess in chip fabrication is the envy of the world. Yeah, there was that little thing with the Pentium 4, which hit the process wall like a freight train, but for the most part, Intel's mastery of chip fabrication has always made its new CPUs a tour de force of technology that makes you wonder if the company doesn't have a crashed flying saucer hidden at 2200 Mission College Boulevard.
With Ivy Bridge, Intel again amazes with the world's first use of tri-gate, or 3D, transistors. Also called finFETs, for fin field-effect transistors, the 3D transistors literally rise up off of the die to dramatically reduce power consumption while increasing performance.
In a traditional planar transistor, current flows across a flat surface like a river. A gate, which controls that flow, lies across the top of that river and makes contact along only a small surface. With a finFET, or 3D tri-gate, the current flows through a fin that juts up from the surface. Instead of contacting the channel along just one plane, the gate wraps around the fin and makes contact on three sides.
Intel says this gives it far greater control of power and enables it to drive the signal harder while adding only a small amount to the build cost. Despite having similar architectural underpinnings to Sandy Bridge, Ivy Bridge should provide better performance while consuming significantly less power than an equivalent SB processor. So far, that seems to be panning out. A typical performance Sandy Bridge chip, such as the 3.4GHz Core i7-2600K, is rated at 95 watts; the new 3.5GHz Core i7-3770K is rated at 77 watts, and it's the higher-performing processor of the two. The promise of tri-gate should pay even more dividends at lower power thresholds. Right now, Intel is only detailing its quad-core parts. Dual-core CPUs haven't been announced yet, but we'll be curious to see how aggressively Ivy Bridge performs in notebooks.
Ivy Bridge isn't just a process story, though. It's about keeping the chains moving. If, after all this investment in 3D transistors, the damned CPU isn't any faster, no one would care if it were made out of the purest shimmering samite. Fortunately, that isn't the case, which you can see in the chart on the third page. But first, let's break it down two ways: Even Intel says Ivy Bridge isn't a big step forward for pure x86 performance, as it's largely a die shrink of the Sandy Bridge core. The cache remains the same and the base clocks are similar. Where Ivy Bridge appears to have an edge in x86 performance is in its lower power envelope. As you know, Intel essentially overclocks, or "Turbo Boosts," the chip based on how much power it's drawing and how hot it's running. So if a chip runs cooler and consumes less power than its counterpart, it can hold a higher turbo clock for longer.
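If you want a feel for why a lower power envelope translates into higher sustained clocks, here's a rough, back-of-the-napkin model of a power-limited turbo scheme. The 95W and 77W TDPs are Intel's published ratings; everything else (the base power draws, the per-bin power cost) is our own made-up assumption for illustration, not Intel's actual Turbo Boost algorithm.

# Rough, illustrative model of a power-limited turbo scheme. Only the 95W/77W
# TDPs come from Intel; the other figures are assumptions for illustration.

BCLK_MHZ = 100  # LGA1155 base clock

def max_turbo_ratio(tdp_watts, base_ratio, base_power_watts, power_per_bin=2.5):
    """Keep adding 100MHz 'bins' until the estimated package power hits the TDP."""
    ratio, power = base_ratio, base_power_watts
    while power + power_per_bin <= tdp_watts:
        ratio += 1
        power += power_per_bin
    return ratio

# Hypothetical base power draws; only the TDP ratings are Intel's numbers.
sandy = max_turbo_ratio(tdp_watts=95, base_ratio=34, base_power_watts=80)
ivy   = max_turbo_ratio(tdp_watts=77, base_ratio=35, base_power_watts=60)
print(f"Sandy Bridge-style part tops out near {sandy * BCLK_MHZ} MHz")
print(f"Ivy Bridge-style part tops out near {ivy * BCLK_MHZ} MHz")

The cooler-running chip simply has more room to climb before it bumps into its power ceiling, which is exactly the behavior Intel is describing.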
Where Intel seems to have put most of its focus this time is on the GPU side. In fact, Intel says it has achieved roughly a doubling of graphics performance over the Sandy Bridge processors. The improvement is good enough that the company says Ivy Bridge processors are capable of playing 100 games out of the box, where Sandy Bridge could only play 50. Detailed info about Ivy Bridge's graphics capabilities is on the second page, but suffice it to say, it's obviously better. Is it enough to forgo a discrete GPU?
For certain uses, such as an HTPC or all-in-one PC that won't be used primarily as a gaming machine, yes. Of course, notebook users will also be pleased to get more graphics performance from the newer Ivy Bridge parts.
Overclocking
With Ivy Bridge, Intel maintains the "K" versions that it introduced with its Lynnfield procs and continued with Sandy Bridge. Like Sandy Bridge, Ivy Bridge isn't hugely tolerant of bclock, or base clock, overclocking. Intel says the most you should expect is a 7 percent bclock nudge before things go sideways. Instead, overclocking will continue to rely on upping the Turbo Boost or clock ratios. Intel has enhanced Ivy Bridge a bit by increasing the maximum core ratio from 59 on Sandy Bridge to 63, and Ivy Bridge now lets you change the core ratios in real time. The graphics core supports a greater overclocking range, too, and Ivy Bridge will let you run the RAM up to DDR3/2667 through overclocks (DDR3/1600 is the official speed).
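For the napkin-math crowd, those limits work out as follows. The 63x ratio cap and the roughly 7 percent bclock headroom are the figures Intel quotes; the arithmetic below is just the theoretical ceiling, not a prediction of what any real chip will actually hit.

# Theoretical overclocking ceilings from the limits Intel quotes: a 63x core
# ratio cap and roughly 7 percent of base-clock headroom. Real-world results
# will be far lower and depend on voltage, cooling, and silicon luck.

BASE_BCLK_MHZ = 100.0

def effective_clock(ratio, bclk_mhz=BASE_BCLK_MHZ):
    return ratio * bclk_mhz

stock_3770k   = effective_clock(35)                        # 3.5GHz stock
ratio_ceiling = effective_clock(63)                        # multiplier cap alone
bclk_ceiling  = effective_clock(63, BASE_BCLK_MHZ * 1.07)  # plus a ~7% bclk nudge

print(f"Stock Core i7-3770K:      {stock_3770k:.0f} MHz")
print(f"Max multiplier (63x):     {ratio_ceiling:.0f} MHz")
print(f"63x with a 7% bclk bump:  {bclk_ceiling:.0f} MHz")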
Compatibility
We've long railed against Intel for releasing new sockets with new CPUs (remember the short-lived Socket LGA1156, Socket 423, and the numerous LGA775 versions?), but the company has stepped up to the plate for the Sandy Bridge-to-Ivy Bridge transition. As Intel promised, most LGA1155 boards will support Ivy Bridge procs if the board maker updates the firmware and BIOS to support the chip. However, not all chipsets make the Ivy Bridge cut. Intel has intentionally left out support for the business chipsets Q65, Q67, and B65 while supporting the consumer H61, H67, P67, and Z68. Why leave some out? Intel believes the days of an IT shop getting down and dirty and upgrading the processor in an office-drone PC are long gone, so there's just no reason to expend the resources on unnecessary support. Besides getting the latest core technology from Intel, switching to Ivy Bridge on older 6-series boards should also give you PCIe 3.0 support on some slots.
Ivy Bridge vs. Sandy Bridge: Beneath the Surface
The 22nm Ivy Bridge processor is considerably smaller than its predecessor. It packs nearly 400 million more transistors yet is about 25 percent smaller. What's more interesting, however, is how much real estate is dedicated to each task on the new Ivy Bridge vs. Sandy Bridge. These die shots (not to scale) show that the almost 2x bump in graphics performance comes at the price of die space. Intel, however, discounts any criticism about how much emphasis it placed on graphics over x86 functionality, saying that just because more space appears to be spent on graphics doesn't mean graphics is more important. Um, OK.
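To put rough numbers on that shrink: the commonly cited quad-core figures are roughly 1.4 billion transistors on a 160mm² die for Ivy Bridge versus roughly 995 million transistors on a 216mm² die for Sandy Bridge. Those exact values are outside figures we're assuming, not numbers from this article, but the arithmetic lines up with the claims above.

# Sanity-checking the shrink with the commonly cited quad-core die figures
# (our assumption, not figures quoted in this article).

sandy = {"transistors_m": 995,  "area_mm2": 216}
ivy   = {"transistors_m": 1400, "area_mm2": 160}

extra_transistors = ivy["transistors_m"] - sandy["transistors_m"]
area_shrink_pct = (1 - ivy["area_mm2"] / sandy["area_mm2"]) * 100

print(f"Extra transistors: ~{extra_transistors} million")    # ~405 million
print(f"Die area reduction: ~{area_shrink_pct:.0f} percent")  # ~26 percent

That works out to roughly 400 million more transistors on a die roughly a quarter smaller, which matches the figures above.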
Sandy Bridge
Ivy Bridge
Next up: The 7-series chipset and graphics benchmarks!
7-series Chipset Brings Few Changes
Gigabyte's GA-Z77X-UD5H features out-of-the-box Ivy Bridge support and USB 3.0 ports powered by Intel!
Conspiracy theorists, unite: If you're one of the tin-foil hat wearers (this means you, Nathan Edwards) who was absolutely certain Intel was trying to sandbag USB 3.0 in order to push Thunderbolt, the new Z77 chipset puts your suspicions to rest. The Z77, you see, finally brings native USB 3.0 support to the world of Intel. Why all the fuss over native support? First, it cuts the cost of a board, slightly, since the board maker has one less chip to supply. Generally, performance and compatibility of integrated USB 3.0 tends to be better, too. Finally, native support means USB 3.0 in just about every new PC going forward. That means more devices and lower costs, which, as Admiral Kirk says, is better for me, better for you, and (pause) better for them.
Native USB 3.0 won't extend to all ports on a motherboard, though. The Intel PCH supports up to four USB 3.0 ports, so on motherboards that offer more than that, it'll be a mix of USB 2.0, Intel USB 3.0, and third-party USB 3.0 support. The Gigabyte GA-Z77X-UD5H that we used, for example, has four USB 3.0 ports on the back plus an additional three USB 3.0 headers, courtesy of the Intel chipset and a discrete controller from VIA.
Beyond USB 3.0, the 7-series chipsets are a fairly incremental update. SATA support, for example, is the same weak-sauce mix of two SATA 6Gb/s and four SATA 3Gb/s ports. When we critically asked why not make all the ports 6Gb/s, Intel threw it back in our face, saying that backward compatibility with the 6-series boards was important to keep costs down on the 7-series boards. And since we're always whining about backward compatibility, isn't that important? Well, yes, but this is the last time, Intel. The 6-series and now the 7-series have shared the same SATA 6Gb/s configuration, so we'd better not see it carried into the 8-series, too.
Other key differences between 7-series and 6-series are support for three displays using Ivy Bridge's graphics chip, and of course, support for both Sandy Bridge and Ivy Bridge chips out of the box. Is there a performance difference? Frankly, no. For our tests, we used a Gigabyte GA-Z77X-UD5H board, first with the Core i7-3770K, which we then swapped out for a Core i7-2600K. We then re-ran our benchmarks and compared them to our Z68/2600K numbers. The difference? Nada, other than the weird, unexplainable bogies we had with a couple of benchmarks. The two, frankly, are essentially indistinguishable. Even the Intel USB 3.0 support didn't prove to be superior to any of the discrete USB 3.0 chipsets we've seen. So if you're considering whether to move from Z68 just to upgrade, we don't recommend it. However, if you're building a new box on an Ivy Bridge processor, we'd build on Z77 just to have the latest chipset.
Ivy Bridge Graphics
It's what everyone's been waiting for. Does Intel deliver?
We're all Charlie Browns when it comes to Intel graphics. Intel, of course, is Lucy, pulling the graphics eye-candy football away after promising that this time will be different. Once again, Intel is promising that this generation of the GPU built into the upcoming Ivy Bridge 22nm CPU will be different. Honest!
Several years ago, Intel promised to speed up its graphics core by 10x per generation, and that 10x speedup would start with Ivy Bridge. With Ivy Bridge almost upon us, it's worth diving into its internal architecture to understand what's really changed.
Based on what we know about DirectX 11 compute shaders and the OpenCL 1.1 implementation, it looks like Intel's new GPU is getting a pretty robust set of compute-capable shaders. That's an encouraging sign, as is support for hardware tessellation.
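For the curious, here's a minimal sketch of what a GPU compute workload looks like at the OpenCL 1.1 level, written against the third-party pyopencl bindings. The kernel, buffer names, and scale factor are our own illustrative choices, not Intel sample code, and the sketch assumes an OpenCL runtime is installed on the system.

# A minimal OpenCL-style compute kernel, run through the third-party pyopencl
# bindings; everything here is illustrative, not Intel sample code.
import numpy as np
import pyopencl as cl

src = """
__kernel void scale(__global const float *src, __global float *dst, float factor) {
    int i = get_global_id(0);
    dst[i] = src[i] * factor;
}
"""

ctx = cl.create_some_context()   # picks an available OpenCL device
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, src).build()

data = np.arange(1024, dtype=np.float32)
mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=data)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, data.nbytes)

program.scale(queue, data.shape, None, src_buf, dst_buf, np.float32(2.0))
out = np.empty_like(data)
cl.enqueue_copy(queue, out, dst_buf)
print(out[:4])  # [0. 2. 4. 6.]

The point is simply that Ivy Bridge's execution units are now expected to chew through this kind of general-purpose kernel, not just fixed-function graphics work.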
Those are the gross differences. Internally, the GPU has been redesigned from the ground up and is partitioned into five domains. The global asset area takes care of geometry: it includes the geometry, vertex, and hull shaders, plus the tessellator, and setup also lives in this section. The resulting output is fed through the thread dispatch engine to the execution units (EUs), which do most of the heavy lifting. After the EUs are done, the render section takes over.
Intel's hardware tessellation engine is fixed-function, but can accept different cues for setting the overall level of tessellation. The EUs have been beefed up, with each EU offering 2x the performance per watt of Sandy Bridge. The higher-end HD 4000 has 16 EUs, up from 12 on Sandy Bridge's GPU. Intel also added an L3 cache to the GPU, which improves overall throughput, since data doesn't need to be fed to the GPU from the ring bus as frequently. This also saves on overall power.
One of the key performance-enhancing features is co-issue of instructions to execution units. Sandy Bridge supported this on some operations, but Ivy Bridge extends this to many more operations.
How does this affect actual performance with PC games? We ran a few tests on very early drivers. What we saw was definitely encouraging.
Even with early drivers, we're seeing about a 25 percent or better increase with 3D games. You'll still need to sacrifice some detail levels, but you'll get acceptable performance in all but the most bleeding-edge games. Titles like StarCraft II, Civilization V, Modern Warfare 3, and Portal 2 will probably run fine, if you're willing to dial back resolution, turn off AA, and run at medium or lower detail levels. It's probably best to steer away from highly demanding titles, though, such as Deus Ex: Human Revolution or The Witcher 2.
Note that 3DMark 11 actually runs, which is clear evidence that Ivy Bridge is indeed DirectX 11 compliant. The score isn't big, but the fact that there's a score at all is encouraging. As with Sandy Bridge, Ivy Bridge includes a dedicated, fixed-function video encoder. Intel is claiming a nearly 2x encode advantage over Sandy Bridge, but that will depend on the application and workload. We saw only a 6 percent gain over Sandy Bridge when encoding an HD video file for iPhone using CyberLink's Media Espresso 6.5 (295 seconds for Ivy Bridge versus Sandy Bridge's 311 seconds). Encoding performance is likely to be better with stereoscopic content, for example.
Finally, the new GPU, in conjunction with motherboards using Intel's 7-series chipsets, will support up to three simultaneous displays. As with Sandy Bridge, DVI support will be limited to single link only, but that will only affect a handful of users with older 30-inch monitors. Full bandwidth support for very high resolutions will be available through DisplayPort 1.2 or HDMI 1.4a.
Overall, Ivy Bridge's graphics are clearly better. Desktop users who are regular PC gamers will definitely want to stick with their favorite discrete graphics card, but owners of Ivy Bridge ultrabooks might be able to get a reasonable gaming fix now, provided the unit is built with the HD 4000. It's unlikely that the HD 2500 will be much use for gaming. –Loyd Case
Next up: The benchmarks!
Ivy Bridge vs. the Benchmarks
New kid proves itself to be the new standard bearer
For our testing, we used a Gigabyte GA-Z77X-UD5H motherboard using the new Z77 "Panther Point" chipset. To this, we added a 3.5GHz Core i7-3770K and installed a fresh copy of 64-bit Windows 7 Professional along with 8GB of DDR3/1600, a GeForce GTX 580 card, and a 150GB Western Digital Raptor. For benchmarks, we reached for the same set of mostly CPU-dependent benchmarks that we've used to review the last few rounds of processors.
For direct comparisons, we decided to pit the new 3770K against the Core i7-2600K and Core i7-3820. Why not the Core i7-2700K, which runs at the same clock as the new Core i7-3770K? First, there's but a 100MHz difference between the Core i7-2600K and the new Core i7-3770K, and both are priced the same. The Core i7-2700K has always been a bit of an odd-duck part to us: you pay $25 over a 2600K and really only get an extra 100MHz. Why bother? Obviously, the LGA2011 Core i7-3820 can't be tested in the same board as the Core i7-3770K, so we used our old standby, the Asus P9X79 Deluxe.
For reference, we also included in our chart the performance numbers of the Core i7-3960X, AMD's eight-core FX-8150, and the classic Core i7-990XE "Gulftown." While the last two platforms also had to use different motherboards, we tried to normalize as much as possible by clocking the RAM the same and using the same graphics cards and drivers.
The test suite includes everything from 3D modeling tests, to video editing and video transcoding, to several synthetic benchmarks and a few gaming tests with the resolutions cranked down low enough to take the graphics card out of the equation.
While we included six-core and eight-core processors in the chart, this is really about Intel's quads. Three scenarios come up: Do you buy a Sandy Bridge or Ivy Bridge for your new build? Should you upgrade from your Sandy Bridge to Ivy Bridge? Should you just bypass Ivy Bridge for Sandy Bridge-E or a hexa-core chip?
Let's dig into the numbers
When we look at all three quad cores, it's clear that Ivy Bridge has a performance advantage over the Sandy Bridge part in just about every benchmark. Across the board, we generally saw a 5 to 15 percent advantage in favor of Ivy Bridge. In fact, the only places where Ivy Bridge was slower were 3DMark's GPU test and Dirt 2. Why? Frankly, we don't know. We actually expected those scores to be fairly close, with Ivy Bridge slightly ahead of the pack, but for baffling reasons it was slower in these tests. Even more baffling, an exact duplicate of our configuration at Gigabyte HQ put the numbers where they should have been. What's going on? We're not sure; we swapped every component possible in an attempt to find the gremlin but could not root it out.
Despite these two anomalies, it's pretty clear that Ivy Bridge is faster than the similarly priced Sandy Bridge part. The real shocker was its competitiveness with the Core i7-3820 in some benchmarks. We thought the Core i7-3820's 200MHz base-clock advantage and quad-channel memory would put it in front, but that wasn't always the case. In some benchmarks, the Core i7-3770K was ahead by a small but measurable margin of 3 to 6 percent.
One interesting benchmark to examine here is the Cinebench 10 single-core test, where we have Cinebench 10 run its render on only a single core instead of across all cores. This is probably the best indication of how efficient Ivy Bridge's cores are against the different generations of chips here: Sandy Bridge, Westmere, and Bulldozer. It's just no contest. Ivy Bridge's core is about 15 percent faster than Sandy Bridge's, 9 percent faster than Sandy Bridge-E's, 34 percent faster than Westmere's, and an incredible 73 percent faster than Bulldozer's. Don't think that gives Ivy Bridge a definitive edge over the big boys, though. Despite each core being faster, more cores still matter if your application uses them. Even the ancient Core i7-990XE has an edge over the Core i7-3770K in many of our multithreaded benchmarks. We will be honest, though: the margin isn't as great as we would have expected.
But let's get back to our questions: Do you buy a Sandy Bridge or Ivy Bridge for your new build? This one's easy. Ivy Bridge, my friend. With the 2600K and 3770K priced exactly the same, there's really no reason to buy a 2600K unless your motherboard can't be updated to support the newer chip.
Should you upgrade from your Sandy Bridge to an Ivy Bridge? No. It would be foolish to think that just because Ivy Bridge is here your Sandy Bridge chip is a piece of junk. The only reason we could see upgrading is if you're coming from a lower-end, limited Sandy Bridge chip or need better integrated graphics, but otherwise, Sandy Bridge has plenty of life left in it.
Should you just bypass Ivy Bridge for Sandy Bridge-E or a hexa-core chip? That question can't be answered by us. It has to be answered by your computing needs. While we think Ivy Bridge is a hell of a chip, it's not faster than a hexa-core, even an older one, on thread-heavy tasks like 3D rendering and modeling, video encoding, and other content creation jobs. We still recommend that if you compute for a living, using thread-heavy tasks, it's worth the stretch for a hexa-core chip such as the Core i7-3960X or Core i7-3930K. All that aside, we think the Core i7-3770K is the new king of the midrange. Yes, it's hard to have the same enthusiasm we had when the Core i7-2600K first arrived and wiped the floor with all other CPUs, but you shouldn't discount Ivy Bridge. It's fast, it's cheap, and it's cool. What more could you ask for?