Quote:
The general rule is that if your SLI setup is on an x8 + x8 motherboard, there would not be any difference, since both cards work at x8 even though the first card is plugged into an x16 slot. It should matter, however, if both cards run at x16.
This "general rule" would apply if you were running an application that :
a) benefits from using multiple GPUs
b) is bandwidth constrained
It's pretty clear that neither applies to PowerDirector, at least on my system, when doing a basic re-rendering test with no effects.
FYI, I'm running a motherboard with dual x16 support: a Gigabyte GA-990FXA-UD3.
Both GPUs are GTX 560 Ti cards.
GPU-Z confirms both cards are running at PCIe 2.0 x16 speeds.
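(For anyone who wants to cross-check the link speed without GPU-Z: NVML exposes the same information. Below is a minimal sketch using the pynvml Python bindings; I'm assuming pynvml is installed, via pip install pynvml, and that the driver is recent enough to support these queries.)

```python
# Minimal sketch (not from the original test): query the current PCIe
# link generation and width of each GPU through NVML, as a cross-check
# of the GPU-Z reading.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
        print("GPU {}: PCIe {}.0 x{}".format(i, gen, width))
finally:
    pynvml.nvmlShutdown()
```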
Rendering time is identical whether I have a single GPU (second GPU disabled in Device Manager), 2 GPUs without SLI (3 active displays), or 2 GPUs in SLI mode (only 2 active displays).
In SLI mode, I checked with GPU-Z and here is what I found:
- one GPU has its GPU load going to about 50% while rendering. Memory controller load is 3% and video engine load 0%. When not rendering, all three indicators are at 0%. This is the GPU with the disabled display.
- the other GPU has 7% GPU load, 4% memory controller load, and 95% video engine load. When not rendering, the memory controller is at 15%, GPU load is 4%, and video engine at 0%. This is the GPU with the 2 displays.
So technically it does seem that PowerDirector is using both GPUs when rendering. Too bad it doesn't actually speed anything up!
I reran the test with SLI disabled, and the GPU load, memory controller load, and video engine load for both GPUs are identical to what I saw in SLI mode.
Finally, I disabled one GPU completely in Device Manager and reran the rendering test.
In that case, strangely, GPU-Z still reported two devices, and it reported the same usage numbers for both: GPU load went a bit above 50%, memory controller to 8%, and video engine to about 95%. I think it's a bug between GPU-Z and the nVidia drivers, since only one device was enabled. I didn't bother physically pulling one of the cards.
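By the way, if you'd rather log these counters over time than eyeball GPU-Z, NVML exposes the same utilization numbers. Here's a rough sketch with the pynvml bindings; safe_util is just a hypothetical helper of mine, not part of NVML, and the encoder/decoder queries may not be supported on a card as old as the 560 Ti, which is why they're wrapped.

```python
# Rough sketch: sample per-GPU load, memory controller load, and video
# engine (encoder/decoder) load once per second, similar to watching
# GPU-Z during a render. Stop with Ctrl+C.
import time
import pynvml

def safe_util(fn, handle):
    # Hypothetical helper: encoder/decoder utilization may be
    # unsupported on older cards/drivers, in which case NVML raises.
    try:
        return fn(handle)[0]
    except pynvml.NVMLError:
        return None

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    while True:
        for i in range(count):
            h = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(h)  # .gpu / .memory, in %
            enc = safe_util(pynvml.nvmlDeviceGetEncoderUtilization, h)
            dec = safe_util(pynvml.nvmlDeviceGetDecoderUtilization, h)
            print("GPU {}: load {}%  memctl {}%  enc {}%  dec {}%".format(
                i, util.gpu, util.memory, enc, dec))
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```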
I suppose there could be cases where multiple GPUs might help in theory, but I haven't found them.
Maybe if one were running lots of effects, a 4K encode, or something else. I don't know. Edit: it looks like hardware acceleration is greyed out for 4K encodes; I'm not sure if this is a 560 Ti limitation or a CyberLink one.
So far, the second card doesn't really do anything for me, except drive my 3rd display. I have two HP LP3065 displays at 2560x1600 over dual-link DVI on one card, and a 1920x1200 HDMI display in portrait mode on the second card. The 560 Ti can only drive two digital displays at once.
A 600 or 700 series card could drive all three displays at once, but I won't upgrade unless there is some performance benefit to go along with it. More likely, I will wait for a decent and affordable 4K monitor for my next video card upgrade.
FYI, my test case was just re-rendering a 63-second 1080i AVCHD 24 Mbps clip from a camcorder to the same format.
Rendering time was 29 seconds in all cases, or about 2.2x real time. I suppose it's not so bad. I wish the second GPU scaled it to 4x, but unfortunately, it doesn't improve anything.
A full software encode, with hardware decoding also disabled, takes 51 seconds, which is about 1.2x real time. There is quite a bit of difference between 1.2x and 2.2x, so GPU acceleration is very welcome for long renderings. But only one GPU is needed.
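To be explicit about where those "x" figures come from, they're just clip duration divided by render time:

```python
# The speed factors above are clip duration / render time.
clip = 63.0                 # seconds of source video
print(clip / 29.0)          # hardware-accelerated render: ~2.2x real time
print(clip / 51.0)          # pure software render:        ~1.2x real time
```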
I think SLI mainly benefits 3D games. This is actually how it's worded in the SLI configuration in the nVidia control panel:
The options are:
a) maximize 3D performance
b) span displays with Surround
c) activate all displays
d) disable SLI
Only options a and b will enable SLI. Options c and d disable SLI.