
Saturday, September 29, 2012

Best Gaming CPU: $200 And Up Price Range

Core i5-3570K
Codename: Ivy Bridge
Process: 22 nm
CPU Cores/Threads: 4/4
Clock Speed (Max. Turbo): 3.4 GHz (3.8 GHz)
Socket: LGA 1155
L2 Cache: 4 x 256 KB
L3 Cache: 6 MB
Thermal Envelope: 77 W


The Core i5-3570K is 300 MHz faster than the Core i5-3450 at stock speeds, and the K-series' unlocked ratio multiplier is a must-have for overclockers looking to unleash significant performance improvements. It is for this reason alone that a gamer should shell out the extra $30 over Intel's slower model. After all, the pricier chip's HD Graphics 4000 is hardly relevant when you plan to use a discrete card anyway.
If you don't plan to overclock, then we think that there's little reason to look past the Core i5-3450.
Read our review of the Ivy Bridge-based CPUs
CPUs priced over $230 offer rapidly diminishing returns when it comes to game performance. As such, we have a hard time recommending anything more expensive than the Core i5-3570K, especially since this multiplier-unlocked processor can be overclocked to great effect if more performance is desired. When overclocked, it meets or beats the $1000 Core i7-990X Extreme Edition when it comes to gaming.

But now that LGA 2011 is here, there's certainly an argument to be made for it as the ultimate gaming platform. LGA 2011-based CPUs have more available cache and as many as two more execution cores than the flagship LGA 1155 models. Additionally, more bandwidth is delivered through a quad-channel memory controller. And with 40 lanes of third-gen PCIe connectivity available from Sandy Bridge-E-based processors, the platform natively supports two x16 and one x8 slot, or one x16 and three x8 slots, alleviating potential bottlenecks in three- and four-way CrossFire or SLI configurations.
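To make the lane arithmetic concrete, here's a minimal sketch (a hypothetical check using only the numbers from the paragraph above) confirming that both slot layouts exactly fit within the 40 lanes Sandy Bridge-E provides:

# Sandy Bridge-E CPUs expose 40 PCIe 3.0 lanes; each layout below
# lists per-slot lane widths, and a valid layout must fit within 40.
TOTAL_LANES = 40

layouts = {
    "two x16 + one x8":   [16, 16, 8],
    "one x16 + three x8": [16, 8, 8, 8],
}

for name, slots in layouts.items():
    used = sum(slots)
    assert used <= TOTAL_LANES, f"{name} needs {used} lanes"
    print(f"{name}: {used} of {TOTAL_LANES} lanes")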

Although they sound impressive, those advantages don't necessarily translate into significant performance gains in modern titles. Our tests demonstrate fairly little difference between a $225 LGA 1155 Core i5-2500K and a $1000 LGA 2011 Core i7-3960X, even when three-way graphics card configurations are involved. It turns out that memory bandwidth and PCIe throughput don't hold back the performance of existing Sandy Bridge-based machines.
Where we do see the potential for Sandy Bridge-E to drive additional performance is in processor-bound games like World of Warcraft or the multiplayer component of Battlefield 3. If you're running a three- or four-way array of graphics cards already, there's a good chance that you already own more than enough rendering muscle. An overclocked Core i7-3960X or -3930K could help the rest of your platform catch up to an insanely powerful arrangement of GPUs.

While we generally recommend against purchasing any gaming CPU that retails for more than $220 from a value standpoint (sink that money into graphics and the motherboard instead), there are those of you who have no trouble throwing down serious money on the best of the best, and who require the fastest possible performance available. If this describes your goals, the following CPU is for you:

Best Gaming CPU for $570:
Core i7-3930K

Core i7-3930K
Codename: Sandy Bridge-E
Process: 32 nm
CPU Cores/Threads: 6/12
Clock Speed (Max. Turbo): 3.2 GHz (3.8 GHz)
Socket: LGA 2011
L2 Cache: 6 x 256 KB
L3 Cache: 12 MB
Thermal Envelope: 130 W
Take the $1000 Core i7-3960X, remove 3 MB of L3 cache, and drop the base clock rate by 100 MHz. What do you end up with? Four hundred dollars and change left over, and an Intel Core i7-3930K.
The 100 MHz difference in clock rate is hardly relevant, given that both CPUs benefit from unlocked multiplier ratios. And you'd be hard-pressed to quantify the advantage of 15 MB of shared L3 cache over 12 MB. Moreover, a greater-than-$400 savings lets you buy a nice motherboard and cooler, while still getting the same four-channel memory subsystem and 40-lane PCI Express 3.0-capable controller.

Monday, September 24, 2012

NVIDIA GeForce GTX 660 2GB

Just a month ago, NVIDIA released its GeForce GTX 660 Ti, which was met with critical acclaim. Specs-wise, the card isn't too far off from the GTX 670, which costs $100 more. But at $300, the card wasn't exactly "affordable" by all standards. NVIDIA knew it had a duty to finally deliver a mainstream Kepler part as close to $200 as possible, and the result is the $229 non-Ti GTX 660.
The GTX 660 is equipped with 960 cores, vs. 1344 in the Ti. That comparison alone would give us an idea of what to expect here, if NVIDIA, in its usual way, hadn't given the core clock a nice boost on the non-Ti edition. Fewer cores, but +65 MHz on the clock. An interesting move, and not one that anyone will complain about.
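For a rough sense of what that trade looks like on paper (a back-of-envelope sketch, not a benchmark), raw shader throughput scales with cores times clock:

# Back-of-envelope: raw ALU throughput ~ CUDA cores * core clock (MHz).
# This ignores boost behavior, memory bandwidth and ROP counts, so it's
# a ceiling estimate rather than a benchmark prediction.
cards = {
    "GTX 660 Ti": (1344, 915),  # cores, core clock in MHz
    "GTX 660":    (960, 980),
}

ti = cards["GTX 660 Ti"][0] * cards["GTX 660 Ti"][1]
non_ti = cards["GTX 660"][0] * cards["GTX 660"][1]
print(f"GTX 660 raw throughput: {non_ti / ti:.0%} of the Ti's")
# ~76% of the Ti's raw throughput, from only ~71% of its cores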
NVIDIA GeForce GTX 660
Memory density and general architecture layout remain similar between the two cards, but while the 660 Ti is based on the GK104 chip, this non-Ti version uses GK106. Whereas a typical GPC, or Graphics Processing Cluster, has two SMX units, GK106 splits one right down the middle, as the following diagram shows:
NVIDIA GeForce GTX 660 - GK106
This is an odd design, but it won't result in any sort of bottleneck, as all SMX modules interface with their respective raster engines rather than with each other.
Though this non-Ti edition of the GTX 660 cuts the core count quite significantly, its increased core frequency negates some of the power savings we would otherwise have seen. For that reason, the non-Ti is rated at 140 W, vs. the Ti's 150 W.

Model               Cores  Core MHz  Memory      Mem MHz  Mem Bus  TDP
GeForce GTX 690     3072   915       2x 2048MB   6008     256-bit  300W
GeForce GTX 680     1536   1006      2048MB      6008     256-bit  195W
GeForce GTX 670     1344   915       2048MB      6008     256-bit  170W
GeForce GTX 660 Ti  1344   915       2048MB      6008     192-bit  150W
GeForce GTX 660     960    980       2048MB      6000     192-bit  140W
GeForce GTX 650     384    1058      1024MB      5000     128-bit  64W
GeForce GT 640      384    900       2048MB      5000     128-bit  65W
GeForce GT 630      96     810       1024MB      3200     128-bit  65W
GeForce GT 620      96     700       1024MB      1800     64-bit   49W
GeForce GT 610      48     810       1024MB      1800     64-bit   29W
 
Alongside the GTX 660, NVIDIA has also launched the GTX 650, although availability at this point is nil. Despite being briefed on both of these cards at the same time, I haven't found a single review online of the GTX 650, so it's to be assumed that NVIDIA isn't rushing that product out too fast, and if I had to guess, it's a card that exists only to finish off the sequential numbering system. Laugh all you want, but imagine the table above without the GTX 650. That'd look rather odd, wouldn't it?
That aside, for NVIDIA to call it the GTX 650 is a bit of an insult to the GTX name. In no possible way does this GPU deserve it - GTX has traditionally represented cards that could more than handle games being run with lots of detail and at current resolutions. "GTX" is certainly suitable for the 660, but how did NVIDIA deem the 650 worthy when it slots just barely in front of the $100 GT 640? Maybe next we'll see Porsche release a 4 cylinder 911 Turbo S.
Rant aside, the vendor to provide us with a GTX 660 sample is GIGABYTE. Unfortunately, it's an "OC Version", which means we are unable to deliver baseline GTX 660 results (I am not keen on forcing turbo adjustments). Making matters a bit worse, the OC isn't that minor. Memory remains the same, but the core gets +73 MHz tacked on. An OC like this is great for consumers, but tough on reviewers who'd like to compare GPUs fairly.

With all of the launches NVIDIA's done in the past couple of months for Kepler, it was difficult to explore GTX 660 with anything more than minimal enthusiasm. However, from what we've seen throughout all of our testing, the GTX 660 is actually quite an impressive card, and possibly one of the most important to NVIDIA's entire 600 series line-up.
The reason for that boils down to the affordable price-point, and the performance it delivers. A $229 card that can handle a graphically gorgeous game like Battlefield 3 at Ultra detail at 1080p? Do I really need to explain why that's awesome?
Unfortunately, we didn't have a GPU directly comparable to this one, so there's no true apples-to-apples comparison. AMD's Radeon HD 7850 comes closest, at about $200. In that match-up, we saw the GTX 660 consistently outperform the HD 7850, with the smallest gains coming in the heavily AMD-favored DiRT: Showdown. Performance increases of 10-20% were not uncommon. In the also AMD-favored SHOGUN 2, the GTX 660 averaged 50% faster.

Saturday, September 22, 2012

EVGA GTX 680 SUPERCLOCKED SIGNATURE 2

We are going to talk about a graphics card called the EVGA GeForce GTX 680 SC Signature 2:
Interestingly, the official EVGA website says that this product can only be ordered singly: one device per buyer. The company seems to regard this graphics card as something special, given such a careful approach to its distribution. So, let's see what secrets are hidden under the name of EVGA GeForce GTX 680 SC Signature 2.

Technical Specifications

The detailed technical specifications of the new EVGA GeForce GTX 680 SC Signature 2 card are summed up in the table below, side by side with those of the reference Nvidia GeForce GTX 680 (the differences are marked in bold):
 
 

Performance

3DMark 2011

Metro 2033: The Last Refuge

Aliens vs. Predator (2010)

Total War: Shogun 2

Crysis 2

Battlefield 3

Sunday, August 12, 2012

Do Graphics Cards Need 4 GB of Memory?

Putting more memory on a graphics card is often not a real improvement but a marketing trick targeted at inexperienced users. Most often it is employed with entry-level solutions, such as the GeForce GT 630, which do not need more than 1 gigabyte of onboard memory for any application they can cope with. However, this doesn't prevent their manufacturers from installing additional memory, up to a fantastic 4 gigabytes, to make them look more attractive in the buyer's eyes.

It's different with top-end solutions such as Nvidia's GeForce GTX 680 and 670 and AMD's Radeon HD 7950 and 7970. The AMD cards come with 3 gigabytes of memory by default, which seems to be quite enough, but Nvidia has already been criticized for equipping its Kepler-based GeForce series products with only 2 gigabytes of GDDR5 memory. Meanwhile, the GK104 Kepler GPU can actually work with either 2 or 4 GB of memory, according to its specifications, and some manufacturers have taken advantage of this opportunity. One of them is EVGA, whose GeForce GTX 670 4GB Superclocked+ w/Backplate is going to be reviewed in this article.
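For a sense of scale before the results, here's a hypothetical back-of-envelope (not a measurement from this review) showing how quickly render targets alone grow with resolution and antialiasing; textures, geometry and driver overhead all come on top of this:

# Rough render-target footprint: one RGBA8 color buffer plus one D24S8
# depth/stencil buffer, both 4 bytes per sample, multiplied by MSAA level.
def render_targets_mb(width, height, msaa=1):
    samples = width * height * msaa
    return samples * (4 + 4) / (1024 ** 2)  # color + depth, in MB

for w, h, aa in [(1920, 1080, 4), (2560, 1440, 4), (3240, 1920, 8)]:
    print(f"{w}x{h} @ {aa}x MSAA: ~{render_targets_mb(w, h, aa):.0f} MB")
# Triple-monitor 3240x1920 with 8x MSAA is already ~380 MB before a
# single texture is loaded, which is why 2 GB can run short up there.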

In this test, the EVGA GeForce GTX 670 Superclocked with 4 GB of memory is three times as fast as the regular GeForce GTX 670 and twice as fast as the overclocked GeForce GTX 680. That would be perfect if its frame rate were not as low as 14-15 fps, bottoming out at 5-6 fps. Of course, there's no question of smooth gameplay at such a low speed. Still, we can note that the doubled amount of memory does provide some benefit here.






Conclusion

Increasing the amount of memory on board GeForce GTX 670 and GTX 680 cards translates into obvious performance benefits only in specific cases, such as triple-monitor set-ups at 3240x1920 with antialiasing enabled. Metro 2033: The Last Refuge and Sniper Elite V2 are the only games that need more than the standard 2 GB of graphics memory, but contemporary high-end graphics cards are too slow in these games anyway, even with 4 GB of video memory. In the rest of our games we could hardly see any difference between GeForce GTX 670s with 2 and 4 GB of memory at 3240x1920, and no difference at all at 2560x1440. So, purchasing a 4 GB card wouldn't be worth the investment unless you've got a triple-monitor configuration. But if you do have one, 4 GB graphics cards really make sense for 2-, 3- and 4-way SLI configurations and playing contemporary games at high resolutions.

by Sergey Lepilov

 

Monday, August 6, 2012

Nvidia GeForce GTX 660 Ti Finally Here?

A Swedish retailer has posted a pre-order listing for the Asus GeForce GTX 660 Ti DirectCU II graphics card at a price of just under $400.

Based on the Kepler architecture, the 660 Ti will run 1,344 cores and integrate 2 GB of GDDR5 memory on a 192-bit memory bus (with 144.2 GB/s of bandwidth). The card is nearly identical in its specs to the GTX 670 (which has a 256-bit memory interface with 192.2 GB/s of bandwidth), which sells in the $400 neighborhood as well, but we expect the new card to use less power and distance itself in price. The price mentioned on the Swedish site may be a bit premature and optimistic. A $350 target for volume cards seems more realistic to us, but we'll find out soon.
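Those bandwidth figures fall straight out of the standard formula: effective memory clock times bus width, divided by eight. A quick sketch with the numbers quoted above:

# Memory bandwidth (GB/s) = effective clock (MHz) * bus width (bits) / 8 / 1000
def bandwidth_gbps(effective_mhz, bus_bits):
    return effective_mhz * bus_bits / 8 / 1000

print(f"192-bit @ 6008 MHz: {bandwidth_gbps(6008, 192):.1f} GB/s")  # 144.2
print(f"256-bit @ 6008 MHz: {bandwidth_gbps(6008, 256):.1f} GB/s")  # ~192.3, quoted as 192.2

by Wolfgang Gruener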

Thursday, August 2, 2012

Nvidia GeForce GTX 690 SLI Benchmarks

Performance

The two semi-synthetic benchmarks and the tech demo were run at their default settings in two graphics quality modes, although Unigine Heaven doesn't allow a resolution higher than 1920x1080 pixels. Anyway, we can still compare our graphics subsystems in these benchmarks.

3DMark Vantage


3DMark 2011

We’ve got the same picture in the newer version of 3DMark, except that the two GTX 690s in 4-way SLI mode enjoy a larger advantage over the single GTX 690.

Unigine Heaven Demo

Unigine Heaven only manages to get the best out of our graphics subsystems at the highest settings with 8x antialiasing. The standings are the same as in the two versions of 3DMark, though.
Now let’s see what we have in real games.

S.T.A.L.K.E.R.: Call of Pripyat

There’s nothing extraordinary about these results except that the two Radeon HD 7970 GHz Edition cards are ahead of the single GTX 690.

Metro 2033: The Last Refuge

Except for the minimum frame rate, the two GeForce GTX 690s are just splendid here. No other graphics card or multi-GPU subsystem has ever delivered such a high frame rate in Metro 2033: The Last Refuge.

Just Cause 2

The results of this test are more predictable and easier to explain:
We can see that the single-GPU flagship products from AMD and Nvidia cannot make the game comfortably playable at 3240x1920 pixels, although Just Cause 2 was released over two years ago. The CrossFireX and SLI configurations are the most appropriate choice here.

Aliens vs. Predator (2010)

The overall picture is similar to the previous test:
One top-end graphics card wouldn’t be enough if you’ve got as many as three HD monitors. Take note of the high efficiency of the CrossFireX tandem built out of two Radeon HD 7970 GHz Edition cards. At the highest settings it is almost as fast as the two GTX 690s.

Lost Planet 2

The overclocked GeForce GTX 680 is ahead of the Radeon HD 7970 GHz Edition at 2560x1440 but the latter overtakes it when we switch to the multi-monitor configuration, even though the Lost Planet 2 engine is optimized for Nvidia's architecture. There's nothing we can add about the SLI and CrossFireX tandems except that the GTX 690s seem to be limited by the platform's performance at 2560x1440 pixels.

Sid Meier’s Civilization V

If you thought Civilization V was a light application, you may want to reconsider after seeing the results of the single top-end graphics cards at 3240x1920 pixels: 23 fps with the GTX 690 and 38 fps with the HD 7970 GHz Edition. So, that's where the SLI and CrossFireX tandems come in handy. And we should also note that AMD routs its opponent here.

Total War: Shogun 2

The same goes for this test:
Nvidia is more or less competitive at the classic resolution of 2560x1600 pixels, but AMD's solutions are unrivalled with our multi-monitor configuration at 3240x1920. Total War: Shogun 2 obviously needs more than the 2 gigabytes of memory installed on board the GTX 680 and 690 (per GPU). Coupled with the 256-bit bus, the limited memory bandwidth slows the fast Kepler GPU down. Alas, this is not the only game in this test session where Nvidia's solutions behave like that.

Crysis 2

Like Metro 2033: The Last Refuge, this game may occasionally slow down very much, but the standings are overall the same as in most other tests. You need at least a single GeForce GTX 690 for the triple-monitor configuration or, better yet, two Radeon HD 7970 GHz Edition cards which have a much higher bottom frame rate.

Hard Reset Demo


Conclusion

Combined into a 4-way SLI configuration, two Nvidia GeForce GTX 690 cards are indeed overkill for ordinary users who play their games on a single monitor, even a high-resolution one. Well, we couldn't have any doubts about that really. The more surprising outcome of this test session is that such a tandem can indeed be helpful for a triple-monitor setup with a resolution of 3240x1920 pixels because a single GTX 690 wouldn’t cope. Such a high resolution translates into a very high load on the graphics subsystem, especially if you also enable full-screen antialiasing.
But the biggest surprise is that Nvidia isn't quite ready for that resolution with its Kepler-based graphics cards.

by Sergey Lepilov

Tuesday, July 31, 2012

Nvidia GeForce GTX 660 Ti Specifications and Launch Date Rumored

Nvidia is set to release its new Kepler-based GTX 660 Ti, replacing the GTX 560 Ti. It was originally reported that the GTX 660 Ti would not hit the market until Q3, but now all signs point to an August 16 release. Nvidia could be using the upcoming GamesCom event as the stage to release its new card.
Specification      GeForce GTX 660 Ti  GeForce GTX 670
Architecture       Kepler              Kepler
Technology         28 nm               28 nm
GPU                GK104 (?)           GK104
CUDA cores         1344                1344
Base frequency     915 MHz             915 MHz
Boost frequency    980 MHz             980 MHz
Memory bus         192-bit             256-bit
Amount of memory   2 GB GDDR5          2 GB GDDR5
Memory frequency   6008 MHz            6008 MHz
TDP                150 W               170 W

The GTX 660 Ti looks to be basically the same card as the GTX 670, except with a 192-bit memory bus and a lower TDP of 150 W. It is based on the Kepler architecture with 1344 CUDA cores (192 x 7), is equipped with four video outputs (two DVI, HDMI and DisplayPort), and is powered via two 6-pin PCI Express connectors. We got an early glimpse of what a stock EVGA GeForce GTX 660 Ti card looks like from a leaked image coming from Expreview.
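The (192 x 7) figure refers to Kepler's SMX layout, in which each SMX carries 192 CUDA cores. A tiny sketch of that arithmetic, plus the bandwidth cost of the narrower bus (using only the rumored values from the table above):

# Kepler groups CUDA cores into SMX units of 192 cores each, so the
# rumored 1344-core count implies 7 SMX units.
CORES_PER_SMX = 192
print(1344 // CORES_PER_SMX)  # 7 SMX units

# The narrower bus is the main cut vs. the GTX 670: 192-bit is three
# quarters of 256-bit, so bandwidth drops by 25% at the same memory clock.
print(192 / 256)              # 0.75

by Doug Crowthers; source: SweClockers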
Image Leaked by: Expreview