General Gaming Article

Newegg Daily Deals: Fractal Design Define R5 Case, Toshiba 1TB HDD, and More!

Posted: 19 Jan 2016 12:23 PM PST

Fractal Design R5

Top Deal:

Building a PC is pretty easy; it's when you start aiming for specific traits that things can get a little challenging. For example, have you ever built a quiet PC? It takes a bit of research, the right parts, and of course a case that muffles sound helps too. That's the type of thing you can do with today's top deal -- it's for a Fractal Design Define R5 FD-CA-DEF-R5-BK Black Computer Case for $80 with free shipping (normally $110). It's equipped with sound dampening material to keep fan noise to a minimum.

Other Deals:

Toshiba 1TB 7200 RPM 32MB Cache SATA 6.0Gb/s 3.5-inch Internal Hard Drive Bare Drive for $40 with free shipping (normally $50 - use coupon code: [ESCEFFM26])

MSI GeForce GTX 960 DirectX 12 GTX 960 GAMING 4G 4GB 128-Bit GDDR5 PCI Express 3.0 x16 HDCP Ready SLI Support ATX Video Card for $230 with free shipping (normally $242; additional $20 Mail-in rebate)

HGST 1TB Ultra-Portable Drive USB 3.0 for $50 with free shipping (normally $60 - use coupon code: [EMCEFFM23])

G.Skill Ripjaws Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory for $30 with free shipping (normally $36)

Fractal Design's Nano S Case Brings Quiet Computing to Little Builds

Posted: 19 Jan 2016 12:00 PM PST

Silent but deadly

Fractal Design Nano S

Fractal Design says its new Nano S offers a unique combination of size (it's a compact enclosure for mini ITX builds), noise (it has several sound dampening qualities), and potential power (you can fit some full size components inside).

The case measures just 275mm (W) by 485mm (H) by 420mm (D) and weighs 4.6 kilograms, or 10.82 inches (W) by 19.09 inches (H) by 16.53 inches (D) and 10.14 pounds. It has about half the volume of a standard ATX form factor case, Fractal Design says.

Fractal Design's goal was to create a quiet case that offers "enthusiast-level builds" in a mini ITX footprint. To that end, it supports up to four storage drives, including a pair of dedicated 2.5-inch mounting points and two more that each support both 2.5-inch and 3.5-inch drive installations.

Graphics cards up to 315mm (12.4 inches) in length can fit inside. To put that into perspective, a reference GeForce GTX Titan X measures 10.5 inches long. This would also be a good time to remind anyone reading that AMD recently slashed the price of its Radeon R9 Nano to $499.

You can also fit a standard ATX power supply up to 160mm/6.3 inches deep and CPU cooler up to 160mm/6.3 inches high.

The Nano S sports half a dozen fan positions and comes with two fans, a 140mm and a 120mm. The front and bottom fan slots are filtered, and the filters eject from the front of the case to keep those dust bunnies from raising an army.

You can also opt to liquid cool -- the case can hold up to a 280mm radiator up front, up to 240mm up top, and up to a 120mm radiator on the bottom.

As for the quiet computing claim, Fractal Design decked out its Nano S with sound dampening material on both side panels, or just one if opting for the version with a side window.

You should be able to order the Nano S soon for $65 MSRP, or $70 with a side window.

Follow Paul on Google+, Twitter, and Facebook

November 2014: Ultimate Minecraft Mods

Posted: 19 Jan 2016 11:46 AM PST


In the PDF archive of the November 2014 issue, you can find: 

  • Ultimate Minecraft Mods
  • Intel's new Haswell-E CPU
  • Coding Raspberry Pi
  • Build It: Pentium K budget gaming rig

September 2014: Dream Machine

Posted: 19 Jan 2016 11:11 AM PST


In the PDF archive of the September 2014 issue, you can find:

  • Dream Machine!
  • High-End Gaming Mouse Roundup
  • How to Calibrate Your Monitor 
  • Build It: A small Devil's Canyon rig

Intel Bakes Multifactor Authentication into 6th Generation Core vPro Platform

Posted: 19 Jan 2016 10:40 AM PST

Intel Authenticate brings security to a new level

Intel Core i7 vPro

Today is a "big day for business," Intel says, and that's because the world's largest semiconductor maker announced the availability of its 6th generation Core vPro processor family. 

It's Skylake meets vPro, which means better performance and enhanced security. Starting with the former, Intel's pitch focuses on businesses rocking older laptops. Compared to a 5-year-old laptop, Intel points out its 6th generation Core and Core vPro processors offer 2.5 times the performance and 3 times the battery life, while waking up 4 times faster. And on the desktop, businesses can expect a 60 percent performance jump compared to its 4th generation architecture (Haswell).

"Older laptops can cost businesses $4,203 per year, for every three PCs, in maintenance and lost productivity. New business PCs can help address this by delivering up to 2.5 times the performance and a 30 times increase in graphics performance over a 5-year-old device, providing users with much more productive and powerful business tools," Intel claims.

As we've already seen in the OEM space, some of the systems based on Skylake are trending towards sleeker, thinner, and lighter profiles than previous generation devices. That same type of portability is now transferring over to the business side.

"With incredible, new, eye-catching designs, added performance, and longer battery life, the 6th Gen Intel Core and Intel Core vPro processors are setting a new standard for business computing," said Tom Garrison, vice president and general manager for the Intel Business Client division. "By also adding enhanced security capabilities in the hardware, Intel has helped to make these newest PCs an integral part of a business's overall security solution, making users more secure and productive than ever before."

Increased Security

There have been a lot of security breaches over the past couple of years. According to Intel, over half of today's data breaches start with misused or stolen credentials.

To buck this trend, Intel is previewing a new security solution called Intel Authenticate. It's an embedded multifactor authentication technology that uses a combination of up to three identifying factors at the same time: something you know, something you have, and something you are.

"By doing so, the most common software based attacks that steal user credentials through viruses or malware are rendered ineffective. Intel delivers a secure PIN, a Bluetooth proximity factor with your Android or iPhone, a logical location factor with vPro systems and fingerprint biometrics. IT can choose the number and combination of factors they desire depending on their security needs and preferences for their users," Garrison explains, Vice President and General Manager of Intel Business and Client Platforms.

Intel Authenticate is available on all of Intel's 6th generation Core vPro and Core platforms, albeit in preview form for businesses to test.

You can find a list of Intel's new vPro lineup here.

Follow Paul on Google+, Twitter, and Facebook

Star Wars Infiltrates List of 25 Worst Passwords of 2015

Posted: 19 Jan 2016 09:47 AM PST

These are not the passwords you're looking for

Star Wars

Disney's Star Wars: The Force Awakens proved to be a box office hit with nearly $1.9 billion in worldwide ticket sales and counting since its general release on December 18, 2015 (there were limited viewings in select theaters a day before). That's impressive; the same can't be said for the passwords fans of the franchise are using.

Star Wars themed passwords have found their way onto SplashData's list of the 25 worst passwords of 2015, including "starwars" at No. 25, "solo" at No. 23, and "princess" at No. 21. All three are new additions to the list.

"When it comes to movies and pop culture, the Force may be able to protect the Jedi, but it won't secure users who choose popular Star Wars terms such as "starwars," "solo," and "princess" as their passwords," SplashData said.

SplashData compiled its fifth annual list from over 2 million leaked passwords during the past year. Some of the new ones are longer than what the firm typically sees, potentially indicating that websites and web users are trying to be more secure.

"We have seen an effort by many people to be more secure by adding characters to passwords, but if these longer passwords are based on simple patterns they will put you in just as much risk of having your identity stolen by hackers," said Morgan Slain, CEO of SplashData. "As we see on the list, using common sports and pop culture terms is also a bad idea. We hope that with more publicity about how risky it is to use weak passwords, more people will take steps to strengthen their passwords and, most importantly, use different passwords for different websites."

Here's a look at the full list:

  1. 123456 (Unchanged from 2014)
  2. password (Unchanged)
  3. 12345678 (Up 1)
  4. qwerty (Up 1)
  5. 12345 (Down 2)
  6. 123456789 (Unchanged)
  7. football (Up 3)
  8. 1234 (Down 1)
  9. 1234567 (Up 2)
  10. baseball (Down 2)
  11. welcome (New)
  12. 1234567890 (New)
  13. abc123 (Up 1)
  14. 111111 (Up 1)
  15. 1qaz2wsx (New)
  16. dragon (Down 7)
  17. master (Up 2)
  18. monkey (Down 6)
  19. letmein (Down 6)
  20. login (New)
  21. princess (New)
  22. qwertyuiop (New)
  23. solo (New)
  24. passw0rd (New)
  25. starwars (New)

You have to take these lists with a grain of salt. It's impossible to know how many people are truly relying on dumb passwords like "123456" versus inputting something quick and simple to gain access to locked content.

Still, it's a reminder that horrible passwords do still exist, and even the Force can't do anything about it.

Follow Paul on Google+, Twitter, and Facebook

How Processors Work

Posted: 19 Jan 2016 12:00 AM PST

An in-depth look into what gives your computer its brain power

When asked how a central processing unit works, you might say it's the brain of the computer. It does all the math and makes logical decisions based on certain outcomes. But even though today's high-end processors are built from billions of transistors, they're still made up of basic components and foundations. Here, we'll go over what goes on in most processors and the foundations they're built on.

This graphic is a block diagram of Intel's Nehalem architecture that we can use to get an overview. While we won't be going over this particular design (some of it is specific to Intel's processors), what we'll cover does explain most of what's going on.

[Block diagram of Intel's Nehalem architecture]

The Hard Stuff: Components of a Processor

Most modern processors contain the following components:

  • A memory management unit, which handles memory address translation and access
  • An instruction fetcher, which grabs instructions from memory
  • An instruction decoder, which turns instructions from memory into commands that the processor understands
  • Execution units, which perform the operation; at the very least, a processor will have an arithmetic and logic unit (ALU), but a floating point unit (FPU) may be included as well
  • Registers, which are small pieces of memory that hold important bits of data

The memory management unit, instruction fetcher, and instruction decoder form what is called the front-end. This is a carryover from the old days of computing, when front-end processors would read punch cards and turn the contents into tape reels for the actual computer to work on. Execution units and registers form the back-end.

Memory Management Unit (MMU)

The memory management unit's primary job is to translate addresses from virtual address space to physical address space. Virtual address space lets the system make programs believe the entire possible address space is available, even if it physically isn't. For instance, in a 32-bit environment, a program believes it has 4GB of address space to work with, even if only 2GB of RAM is installed. This simplifies programming, since the programmer doesn't know what kind of system will run the application.

The other job of the memory management unit is access protection. This prevents an application from reading or writing in another application's memory address without going through the proper channels.
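
To make the translation step concrete, here's a minimal sketch in C of a single-level page-table lookup. The page size, table contents, and addresses are assumptions for illustration only; real MMUs use multi-level tables walked in hardware and hand faults to the operating system.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* assumed 4KB pages */
    #define NUM_PAGES 16u     /* toy virtual address space: 16 pages */
    #define UNMAPPED  UINT32_MAX

    /* Hypothetical page table: virtual page number -> physical frame number. */
    static const uint32_t page_table[NUM_PAGES] = {
        [0] = 3, [1] = 7, [2] = UNMAPPED, [3] = 1,
        /* remaining entries default to frame 0 in this toy example */
    };

    /* Translate a virtual address the way an MMU would; returns 0 on success. */
    static int translate(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t vpn    = vaddr / PAGE_SIZE;  /* virtual page number    */
        uint32_t offset = vaddr % PAGE_SIZE;  /* offset within the page */

        if (vpn >= NUM_PAGES || page_table[vpn] == UNMAPPED)
            return -1;                        /* page fault / access error */

        *paddr = page_table[vpn] * PAGE_SIZE + offset;
        return 0;
    }

    int main(void)
    {
        uint32_t paddr;
        if (translate(0x1ABC, &paddr) == 0)   /* page 1, offset 0xABC */
            printf("virtual 0x1ABC -> physical 0x%X\n", paddr);
        return 0;
    }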

Instruction Fetcher and Decoder

As their names suggest, these units grab instructions and decode them into operations. Notably, in modern x86 designs the decoder turns instructions into micro-operations that the next stages will work with. In modern processors, what comes out of the decoder typically feeds into a control unit, which figures out the best way to execute the instructions. Some of the techniques employed include branch prediction, which tries to figure out what will be executed if a branch takes place, and out-of-order execution, which rearranges instructions so they're executed in the most efficient way.

Execution Units

The bare minimum a general processor will have is the arithmetic and logic unit (ALU). This execution unit works only with integer values and will do the following operations (a quick sketch of a couple of these follows the list):

  • Add and subtract; multiplication is done by repeated additions and division is approximated with repeated subtractions (there's a good article on this topic here)
  • Logical operations, such as OR, AND, NOT, and XOR
  • Bit shifting, which moves the digits left or right
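
Here's that idea in a short C sketch, assuming unsigned values and ignoring overflow: multiplication built from repeated addition, plus a couple of bit shifts.

    #include <stdint.h>
    #include <stdio.h>

    /* Multiply by repeated addition, the way a minimal ALU without a
     * hardware multiplier could do it (unsigned only, no overflow checks). */
    static uint32_t mul_by_addition(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        while (b--)
            result += a;        /* add 'a' to the running total 'b' times */
        return result;
    }

    int main(void)
    {
        printf("6 * 7 = %u\n", mul_by_addition(6, 7));

        /* Bit shifting: each left shift doubles, each right shift halves. */
        printf("5 << 2 = %u\n", 5u << 2);   /* 20 */
        printf("20 >> 1 = %u\n", 20u >> 1); /* 10 */
        return 0;
    }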

A lot of processors will also include a floating point unit (FPU). This allows the processor to work on a greater range and higher precision of numbers that aren't whole. Since FPUs are complex enough to be their own processor, they are often left out of smaller, low-power processors.

Registers

Registers are small bits of memory that hold immediately relevant data. There's usually only a handful of them and they can hold data equal to the bit-size the processor was made for. So a 32-bit processor usually has 32-bit registers.

The most common registers are: one that holds the result of an operation, a program counter (this points to where the next instruction is), and a status word or condition code (which dictates the flow of a program). Some architectures have specialized registers to aid in operations. The Intel 8086, for example, has the Segment and Offset registers. These would be used to figure out address spaces in the 8086's memory-mapping architecture.
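
As a quick worked example, the 8086 forms a physical address by shifting the segment register left four bits (multiplying it by 16) and adding the offset, which a couple of lines of C can demonstrate:

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real-mode address calculation: physical = segment * 16 + offset. */
    static uint32_t seg_offset_to_physical(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* 0xB800:0x0000 is the classic text-mode video memory address. */
        printf("0xB800:0x0000 -> 0x%05X\n", seg_offset_to_physical(0xB800, 0x0000));
        return 0;
    }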

A Note about Bits

A processor's bit count usually refers to the largest data size it can handle at once, and it mostly applies to the execution unit. However, this does not mean the processor is limited to processing data of that size. An eight-bit processor can still process 16-bit and 32-bit numbers, but it takes at least two and four operations, respectively, to do so.
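
As an illustration, here's a rough C sketch of adding two 16-bit numbers using only 8-bit pieces and a carry, which is why such an addition costs at least two operations on an eight-bit processor. The function name and values are made up for the example.

    #include <stdint.h>
    #include <stdio.h>

    /* Add two 16-bit numbers using only 8-bit operations plus a carry,
     * mimicking what an 8-bit processor has to do in two steps. */
    static uint16_t add16_with_8bit_ops(uint16_t a, uint16_t b)
    {
        uint8_t a_lo = a & 0xFF, a_hi = a >> 8;
        uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

        uint16_t lo_sum = (uint16_t)a_lo + b_lo;        /* first 8-bit add  */
        uint8_t  carry  = lo_sum > 0xFF;                /* carry flag       */
        uint8_t  hi_sum = a_hi + b_hi + carry;          /* second 8-bit add */

        return ((uint16_t)hi_sum << 8) | (lo_sum & 0xFF);
    }

    int main(void)
    {
        printf("0x12FF + 0x0001 = 0x%04X\n", add16_with_8bit_ops(0x12FF, 0x0001));
        return 0;
    }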

The Soft Stuff: Ideas and Designs in Processors

Over the years of computer design, more and more ideas were realized, all with the goal of making the processor more efficient at what it does and increasing its instructions per clock cycle (IPC) count.

Instruction Set Design

Instruction sets map numerical indexes to commands in a processor. These commands can be something as simple as adding two numbers or as complex as the SSE instruction RSQRTPS (as described in a help file: Compute Reciprocals of Square Roots of Packed Single-Precision Floating-Point Values).
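
To make "numerical indexes mapped to commands" concrete, here's a toy decoder sketch in C. The opcode values and operations are invented for illustration and don't correspond to any real instruction set.

    #include <stdint.h>
    #include <stdio.h>

    /* Invented opcodes for a toy machine: nothing here matches a real ISA. */
    enum { OP_ADD = 0x01, OP_SUB = 0x02, OP_NOT = 0x03 };

    /* "Decode" a numeric opcode and execute the matching operation. */
    static int32_t execute(uint8_t opcode, int32_t a, int32_t b)
    {
        switch (opcode) {
        case OP_ADD: return a + b;
        case OP_SUB: return a - b;
        case OP_NOT: return ~a;       /* unary: ignores b */
        default:
            printf("illegal instruction 0x%02X\n", opcode);
            return 0;
        }
    }

    int main(void)
    {
        printf("ADD 6,7 -> %d\n", execute(OP_ADD, 6, 7));
        printf("NOT 0   -> %d\n", execute(OP_NOT, 0, 0));
        return 0;
    }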

In the early days of computers, memory was very slow and there wasn't a whole lot of it, while processors were becoming faster and programs more complex. To save on both memory accesses and program size, instruction sets were designed with the following ideas:

  • Variable-length instructions, so that simpler operations could take up less space
  • A wide variety of memory-addressing commands
  • Operations can be performed on memory locations themselves, in addition to using registers, or as part of the instruction

As memory performance progressed, computer scientists found that it was faster to break down the complex operations into simpler ones. Instructions also could be simplified to speed up the decoding process. This sparked the Reduced Instruction Set Computing (RISC) design idea. Reduced in this case means the time to complete an instruction is reduced. The old way was retroactively named Complex Instruction Set Computing (CISC). To summarize the ideas of RISC:

  • Uniform instruction length, to simplify decoding
  • Fewer and simpler memory-addressing commands
  • Operations can only be performed on data in registers or as part of the instruction

There have been other attempts at instruction set design. One of them is the Very Long Instruction Word (VLIW). VLIW crams multiple independent instructions into a single unit to be run on multiple execution units. One of the biggest stumbling blocks is that it requires the compiler to sort instructions ahead of time to make the most of the hardware, and most general purpose programs don't sort themselves out very well. VLIW has been used in Intel's Itanium, Transmeta's Crusoe, MCST's Elbrus, AMD's TeraCore, and NVIDIA's Project Denver (sort of; it has similar characteristics).

Multitasking

Early on, computers could do only one thing at a time, and once a program got going, it would run until completion, or until there was a problem with the program. As systems became more powerful, an idea called "time sharing" was spawned. Time sharing would have the system work on one program and, if something blocked it from continuing, such as waiting for a peripheral to be ready, the system saved the state of the program in memory, then moved on to another program. Eventually, it would come back to the blocked program and see if it had what it needed to run.

Time sharing exposed a problem: A program could unfairly hog the system, either because the program really had a long execution time or because it hung somewhere. So the next systems were built such that they would work on programs in slices of time. That is, every program gets to run for a certain amount of time and after the time slice is up, it moves on to another program automatically. If the time slices are small enough, this gives the impression that the computer is doing multiple things at once.
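
The time-slice idea can be sketched in a few lines of C. This is only a rough illustration with made-up task lengths and slice sizes, not how a real operating system scheduler is implemented.

    #include <stdio.h>

    #define NUM_TASKS  3
    #define TIME_SLICE 2   /* arbitrary "ticks" each task gets per turn */

    /* Remaining work (in ticks) for each toy task. */
    static int remaining[NUM_TASKS] = { 5, 3, 4 };

    int main(void)
    {
        int done = 0;
        /* Round-robin: give each task a fixed slice, then move on. */
        while (done < NUM_TASKS) {
            for (int t = 0; t < NUM_TASKS; t++) {
                if (remaining[t] == 0)
                    continue;                       /* task already finished */
                int run = remaining[t] < TIME_SLICE ? remaining[t] : TIME_SLICE;
                remaining[t] -= run;
                printf("task %d ran %d tick(s), %d left\n", t, run, remaining[t]);
                if (remaining[t] == 0)
                    done++;
            }
        }
        return 0;
    }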

One important feature that really helped multitasking is the interrupt system. With this, the processor doesn't need to constantly poll programs or devices if they have something ready; the program or device can generate a signal to tell the processor it's ready.

Caching

Cache is memory in the processor that, while small in size, is much faster to access than RAM. The idea of caching is that commonly used data and instructions are stored in it and tagged with their address in memory. The MMU will first look in the cache to see if what it's looking for is there. The more often data is found in the cache, the closer the average access time gets to cache speed, offering a boost in execution speed.

Normally, data can only reside in one spot in cache. A method to increase the chance of data being in cache is known as associativity. A two-way associative cache means data can be in two places, four-way means it can be in four, and so on. While it may make sense to allow data to just be anywhere in cache, this also increases the lookup time, which may negate the benefit of caching.
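
Here's a toy C sketch of a two-way set-associative lookup with made-up sizes and a naive fill policy. It only shows how the set index, tag, and ways interact, not how a real cache controller behaves.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_SETS  4   /* toy cache: 4 sets       */
    #define WAYS      2   /* two-way set associative */
    #define LINE_SIZE 64  /* bytes per cache line    */

    /* One cache line: just a valid bit and the tag for this sketch. */
    struct line { bool valid; uint32_t tag; };

    static struct line cache[NUM_SETS][WAYS];

    /* Return true on a hit; on a miss, fill one way of the set. */
    static bool access_cache(uint32_t addr)
    {
        uint32_t block = addr / LINE_SIZE;
        uint32_t set   = block % NUM_SETS;   /* which set the address maps to       */
        uint32_t tag   = block / NUM_SETS;   /* identifies the block within the set */

        for (int w = 0; w < WAYS; w++)
            if (cache[set][w].valid && cache[set][w].tag == tag)
                return true;                 /* hit: data can live in either way */

        /* Miss: naively fill way 0 (a real cache would use LRU or similar). */
        cache[set][0].valid = true;
        cache[set][0].tag   = tag;
        return false;
    }

    int main(void)
    {
        printf("first access: %s\n",  access_cache(0x1000) ? "hit" : "miss");
        printf("second access: %s\n", access_cache(0x1000) ? "hit" : "miss");
        return 0;
    }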

Pipelining

Pipelining is a way for a processor to increase its instruction throughput by way of mimicking how assembly lines work. Consider the steps to executing an instruction:

  1. Fetch instruction (IF)
  2. Decode instruction (ID)
  3. Execute instruction (EX)
  4. Access memory (MEM)
  5. Write results back (WB)

Early computers would process each instruction completely through these steps before processing the next instruction, as seen here:

[Diagram: sequential (non-pipelined) execution, each instruction passing through all five stages before the next begins]

In 10 clock cycles, the processor is completely finished with two instructions. Pipelining allows the next instruction to start once the current one is done with a step. The following diagram shows pipelining in action:

[Diagram: pipelined execution, with a new instruction entering the pipeline each clock cycle]

In the same 10 clock cycles, six instructions are fully processed, increasing the throughput threefold.
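
The ideal-case arithmetic behind those numbers is easy to check: without pipelining, each instruction occupies all five stages before the next can start; with pipelining, once the pipeline is full, one instruction completes every cycle. A tiny C sketch:

    #include <stdio.h>

    #define STAGES 5
    #define CYCLES 10

    int main(void)
    {
        /* Without pipelining, each instruction takes all STAGES cycles in turn. */
        int sequential = CYCLES / STAGES;       /* 10 / 5 = 2 instructions  */

        /* With an ideal pipeline, the first instruction finishes after STAGES
         * cycles, then one more instruction completes every cycle after that. */
        int pipelined = CYCLES - STAGES + 1;    /* 10 - 5 + 1 = 6 instructions */

        printf("sequential: %d instructions, pipelined: %d instructions\n",
               sequential, pipelined);
        return 0;
    }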

Branch Prediction

The major issue with pipelining is that if any branching has to be done, then instructions that were already in the earlier stages have to be discarded, since they're no longer going to be executed. Let's take a look at a situation where this happens.

[Diagram: pipeline flush caused by a branch, with CMP and BNE followed by the discarded SUB, MUL, and DIV instructions]

The instruction CMP is a compare instruction, e.g., does x = y? This sets a flag of the result in the processor. Instruction BNE is "branch if not equal," which checks this flag. If x is not equal to y, then the processor jumps to another location in the program. The following instructions (SUB, MUL, and DIV) have to be discarded because they're no longer going to be executed. This creates a five-clock-cycle gap before the next instruction gets processed.

The aim of branch prediction is to make a guess at which instructions are going to be executed. There are several algorithms to achieve this, but the overall goal is to minimize the amount of times the pipeline has to clear because a branch took place.
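
One of the simplest schemes, a two-bit saturating counter per branch, can be sketched in C. This is just an illustrative algorithm, not necessarily what any particular processor implements: predict "taken" while the counter sits in its upper half, and nudge the counter toward each actual outcome.

    #include <stdbool.h>
    #include <stdio.h>

    /* Two-bit saturating counter: 0-1 predict not taken, 2-3 predict taken. */
    static int counter = 2;  /* start in the "weakly taken" state */

    static bool predict(void)
    {
        return counter >= 2;
    }

    static void update(bool taken)
    {
        if (taken && counter < 3)
            counter++;       /* strengthen the "taken" prediction     */
        else if (!taken && counter > 0)
            counter--;       /* strengthen the "not taken" prediction */
    }

    int main(void)
    {
        bool outcomes[] = { true, true, false, true };  /* observed branch results */
        for (int i = 0; i < 4; i++) {
            printf("predict %s, actual %s\n",
                   predict() ? "taken" : "not taken",
                   outcomes[i] ? "taken" : "not taken");
            update(outcomes[i]);
        }
        return 0;
    }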

Out-of-Order Execution

Out-of-order execution is a way for the processor to reorder instructions for efficient execution. Take, for example, a program that does this:

  1. x = 1
  2. y = 2
  3. z = x + 3
  4. foo = z + y
  5. bar = 42
  6. print "hello world!"

Let's say the execution unit can handle two instructions at once. These instructions are then executed in the following way:

  1. x = 1, y = 2
  2. z = x + 3
  3. foo = z + y
  4. bar = 42, print "hello world!"

Since the value of "foo" depends on "z," those two instructions can't execute at the same time. However, by reordering the instructions:

  1. x = 1, y = 2
  2. z = x + 3, bar = 42
  3. foo = z + y, print "hello world!"

Thus an extra cycle can be avoided. However, implementing out-of-order execution is complex and the application still expects the instructions to be processed in the original order. This has normally kept out-of-order execution off processors for mobile and small electronics because the additional power consumption outweighs its performance benefits, but recent ARM-based mobile processors are incorporating it because the opposite is now true.
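
To make the reordering concrete, here's a toy C sketch (nothing like real hardware, which uses dedicated scheduling logic) that issues up to two instructions per cycle from the example program, holding back any instruction whose inputs aren't ready yet. It reproduces the reordered three-cycle schedule shown above.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* A toy "instruction": a destination name plus up to two source names.
     * An empty string means "none". This mirrors the example program above. */
    struct insn {
        const char *text;
        const char *dst;
        const char *src1, *src2;
    };

    static const struct insn prog[] = {
        { "x = 1",                  "x",   "",  ""  },
        { "y = 2",                  "y",   "",  ""  },
        { "z = x + 3",              "z",   "x", ""  },
        { "foo = z + y",            "foo", "z", "y" },
        { "bar = 42",               "bar", "",  ""  },
        { "print \"hello world!\"", "",    "",  ""  },
    };
    enum { N = sizeof(prog) / sizeof(prog[0]) };

    /* True if instruction b reads a value that instruction a writes. */
    static bool reads_result_of(const struct insn *b, const struct insn *a)
    {
        if (a->dst[0] == '\0')
            return false;
        return strcmp(b->src1, a->dst) == 0 || strcmp(b->src2, a->dst) == 0;
    }

    int main(void)
    {
        bool issued[N] = { false };
        int left = N, cycle = 1;

        while (left > 0) {
            int picked[2], slot = 0;

            /* Pick up to two instructions whose inputs are available: an
             * instruction is skipped if any earlier, not-yet-issued
             * instruction produces one of its sources.                   */
            for (int i = 0; i < N && slot < 2; i++) {
                if (issued[i])
                    continue;
                bool ready = true;
                for (int j = 0; j < i; j++)
                    if (!issued[j] && reads_result_of(&prog[i], &prog[j]))
                        ready = false;
                if (ready)
                    picked[slot++] = i;
            }

            printf("cycle %d:", cycle++);
            for (int s = 0; s < slot; s++) {
                issued[picked[s]] = true;
                printf("  [%s]", prog[picked[s]].text);
            }
            printf("\n");
            left -= slot;
        }
        return 0;
    }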

A Complex Machine Made Up of Simple Pieces

When looked at from a pure hardware perspective, a processor can seem pretty daunting. In reality, those billions of transistors that modern processors carry today can still be broken down into simple pieces or ideas that lay the foundation of how processors work. If reading this article leaves you with more questions than answers, a good place to get started learning more is Wikipedia's index on CPU technologies.
