192-bit or 256-bit
Posted 23 November 2012 - 08:28 AM
I appreciate the help.
Posted 23 November 2012 - 09:24 AM
Posted 23 November 2012 - 12:17 PM Edited by shoumic, 23 November 2012 - 12:33 PM.
I would spend the extra $80. I'm already spending like $300, so why not a little more?
Posted 23 November 2012 - 12:33 PM
Posted 23 November 2012 - 02:27 PM
Posted 23 November 2012 - 08:40 PM
The only reason I want to purchase a graphics card now is that I won't have the funds later. I like to call this time of year "the spending season" because, for some reason, I always end up empty-pocketed by the end of the year. Any suggestions on how much the 192-bit card will truly affect my performance in later years?
Posted 24 November 2012 - 12:30 PM Edited by shoumic, 24 November 2012 - 12:33 PM.
Picked this up from a website.
| So will memory bandwidth amount to any real differences in video game performance? Most PC gamers have a PCI-Express 2.0 compatible motherboard inside their computer system, although the most recent motherboards and all of NVIDIA's GeForce GTX 600-series feature PCI-Express 3.0 compliance. Additionally, nearly all high-performance video cards feature at least 256-bit memory bandwidth or better (such as 384-bit with some AMD Radeon HD 7000-series graphics cards). However, most testing with these high-end graphics cards has shown little indication that bottlenecks actually occur at the PCI-Express level, even while playing the most demanding DirectX 11 video games. It's more likely that a bandwidth bottleneck might occur at the video card's memory subsystem, where the GPU may be capable of sending more information than the video frame buffer can accept. We discovered some evidence of this in our recent testing on the ultra-overclocked ASUS GeForce GTX 660Ti DirectCU-II TOP, which maintained performance levels with a stock GTX 670 in all but the most demanding video games that featured large maps or virtual worlds.|
| Obviously the GPU plays the star role in creating a bottleneck, and while less powerful processors lack the raw number of transactions to saturate the memory subsystem, this isn't so difficult with powerful Kepler-based GeForce GTX 600-series products. Because there are times when the GeForce GTX 660 Ti's 192-bit memory bus can become bottlenecked, what would happen if we split demand between two cards connected together into an SLI set? In theory, by combining two graphics cards with SLI technology we're essentially doubling the memory bandwidth available to standard workloads. Given that 256-bit memory configurations suffer very few bandwidth bottlenecks while 192-bit configurations can have occasional limitations, it seems plausible. Furthermore, the results could really help gamers decide between an SLI set of GTX 660 Ti or GTX 670 graphics cards. |
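For anyone wondering where the 192-bit vs 256-bit difference actually comes from, here's a back-of-the-envelope sketch. Theoretical memory bandwidth is just bus width (in bytes) times the effective memory transfer rate; the 6.0 GT/s effective GDDR5 rate used below is the reference spec for both the GTX 660 Ti and GTX 670, so the only thing that changes between them is the bus width.

```python
def mem_bandwidth_gbs(bus_width_bits, effective_rate_gts):
    """Theoretical peak memory bandwidth in GB/s.

    bus_width_bits: memory interface width (e.g. 192 or 256)
    effective_rate_gts: effective GDDR5 data rate in GT/s (e.g. 6.0)
    """
    # bytes moved per transfer across the bus * transfers per second
    return (bus_width_bits / 8) * effective_rate_gts

# GTX 660 Ti: 192-bit bus at 6.0 GT/s effective
print(mem_bandwidth_gbs(192, 6.0))  # 144.0 GB/s

# GTX 670: 256-bit bus at the same memory clock
print(mem_bandwidth_gbs(256, 6.0))  # 192.0 GB/s
```

So at reference clocks the 256-bit card has a third more raw memory bandwidth, which is why the 192-bit card only shows strain in the bandwidth-heavy cases (large maps, big frame buffers) mentioned in the quote above.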
Since the GTX 660 Ti is already bottlenecked now, it will struggle more later, making it less futureproof.