CyberLink Community Forum
where the experts meet
Multi-GPU support?
[Post New]
When I first started using PD12, I only had a single GTX Titan in my system, and I was happy with the performance.

Now I have two GTX 780 Ti cards in my system, and I've noticed that one card is basically not doing anything. Is there an option I need to turn on to get both cards working on the project? Or is it just not possible?

There seem to be other video editors that support multi-GPU processing, and it would be a shame if this one can't do it.
molan1976 [Avatar]
Newbie Location: Copenhagen, Denmark Joined: Nov 20, 2013 17:19 Messages: 25 Offline
[Post New]
Nvidia's own video conversion software doesn't support multi-GPU either, so it might not be easy to implement.

Not that I understand why that is.

This message was edited 1 time. Last update was at Dec 11. 2013 04:17

[Post New]
I have tried using Vegas Pro 12 (trial version) and it uses every possible computing unit in my system, including my CPU and both of my 780 Ti cards.

PD12 only uses a single 780 Ti for actual computing, and if it uses the CPU at all, I did not notice much CPU usage. I don't really care about CPU usage, but I would just like to see support for multi-GPU computing.
Julien Pierre [Avatar]
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
I spent some time yesterday moving cards between machines. I put 2 GTX 560 Ti cards in SLI.
It was a bit of a pain since they are 1 GB and 2 GB respectively.
But Coolbits=0x18 in the registry did the trick.
Finally, when I disabled my 3rd display, the nVidia 331.93 drivers let me enable SLI.

I benchmarked with both PD12 and PD11, and SLI made zero difference to the performance.

In fact, last year I bought a 680 just to see if it was any faster than my 560 Ti - it was not. I returned the card to Fry's the next day.

I would not bet that the 780 Ti would be any faster for video encoding either. I may try the flagship again this year just to see if nVidia has improved their video encoding performance. I doubt it.

FYI, even with this lack of SLI benefit, PD12 with the 560 Ti is faster at encoding than any other video program I have tried, including Sony Movie Studio 12 and Pinnacle 16.

If that matters, my CPU is an AMD FX-8350 OC'ed at 4.6 GHz with 32 GB RAM. OS is Windows 7 SP1.
MSI X99A Raider
Intel i7-5820k @ 4.4 GHz
32GB DDR4 RAM
Gigabyte nVidia GTX 960 4GB
480 GB Patriot Ignite SSD (boot)
2 x 480 GB Sandisk Ultra II SSD (striped)
6 x 1 TB Samsung 860 SSD (striped)

2 x LG 32UD59-B 32" 4K
Asus PB238 23" HD (portrait)
molan1976 [Avatar]
Newbie Location: Copenhagen, Denmark Joined: Nov 20, 2013 17:19 Messages: 25 Offline
[Post New]
Quote: I spent some time yesterday moving cards between machines. I put 2 GTX 560 Ti cards in SLI.
It was a bit of a pain since they are 1 GB and 2 GB respectively.
But Coolbits=0x18 in the registry did the trick.
Finally, when I disabled my 3rd display, the nVidia 331.93 drivers let me enable SLI.

I benchmarked with both PD12 and PD11, and SLI made zero difference to the performance.

In fact, last year I bought a 680 just to see if it was any faster than my 560 Ti - it was not. I returned the card to Fry's the next day.

I would not bet that the 780 Ti would be any faster for video encoding either. I may try the flagship again this year just to see if nVidia has improved their video encoding performance. I doubt it.

FYI, even with this lack of SLI benefit, PD12 with the 560 Ti is faster at encoding than any other video program I have tried, including Sony Movie Studio 12 and Pinnacle 16.

If that matters, my CPU is an AMD FX-8350 OC'ed at 4.6 GHz with 32 GB RAM. OS is Windows 7 SP1.


CUDA performance has not improved much since the 580, as you can see:

http://www.tomshardware.com/reviews/geforce-gtx-780-performance-review,3516-26.html

The differences between the 580, 680 and 780 are minuscule at best.
[Post New]
Whether or not CUDA in recent nVidia cards lives up to what it used to be is not the reason I created this topic.

I am just asking whether there is a way to get multi-GPU computing working with PD12.
molan1976 [Avatar]
Newbie Location: Copenhagen, Denmark Joined: Nov 20, 2013 17:19 Messages: 25 Offline
[Post New]
Quote: Whether or not CUDA in recent nVidia cards lives up to what it used to be is not the reason I created this topic.

I am just asking whether there is a way to get multi-GPU computing working with PD12.


No, there are no user options. Either the code supports it or it doesn't. I found no options in the manual in that regard.

This message was edited 1 time. Last update was at Dec 11. 2013 09:43

Julien Pierre [Avatar]
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
My testing shows PD12 only uses a single nVidia GPU.
The rendering performance does not improve when adding a second GPU, whether SLI is used or not.

I also monitored the GPU load with GPU-Z: only one GPU was active, while the other was mostly idle.
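
For anyone who wants to watch this over time rather than eyeballing GPU-Z, here is a rough sketch using the pynvml bindings for nVidia's NVML library; it just polls each GPU's core load, memory controller load and video engine (encoder) load once a second. This script is my own illustration, not anything PD12 provides, and the encoder counter may not be available on older cards.

# Poll per-GPU load while a render runs (assumes: pip install nvidia-ml-py)
import time
import pynvml

pynvml.nvmlInit()
count = pynvml.nvmlDeviceGetCount()
try:
    while True:
        for i in range(count):
            h = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(h)
            if isinstance(name, bytes):
                name = name.decode()
            util = pynvml.nvmlDeviceGetUtilizationRates(h)  # core and memory controller load, percent
            try:
                enc, _period = pynvml.nvmlDeviceGetEncoderUtilization(h)  # "video engine" load, percent
            except pynvml.NVMLError:
                enc = 0  # not supported on some older GPUs
            print("GPU%d %s: core %d%% mem %d%% encoder %d%%" % (i, name, util.gpu, util.memory, enc))
        time.sleep(1)
except KeyboardInterrupt:
    pass
pynvml.nvmlShutdown()
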
MSI X99A Raider
Intel i7-5820k @ 4.4 GHz
32GB DDR4 RAM
Gigabyte nVidia GTX 960 4GB
480 GB Patriot Ignite SSD (boot)
2 x 480 GB Sandisk Ultra II SSD (striped)
6 x 1 TB Samsung 860 SSD (striped)

2 x LG 32UD59-B 32" 4K
Asus PB238 23" HD (portrait)
Julien Pierre [Avatar]
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote:
CUDA performance has not improved much since the 580, as you can see:

http://www.tomshardware.com/reviews/geforce-gtx-780-performance-review,3516-26.html

The differences between the 580, 680 and 780 are minuscule at best.


I am not sure how relevant these CUDA 3D tests are to PowerDirector.

A lot can change between different CUDA programs. Depending on how it's written, one program might benefit from faster or multiple GPUs, and another might not. It seems the H.264 encoder in PowerDirector does not, sadly.
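
Just to illustrate what "depends on how it's written" means in practice: the application itself has to split the work and pin each piece to a device, or the second card never gets anything to do. Here is a made-up sketch of the calling side; the "my_encoder" tool and the pre-split clips are hypothetical, and this is not how PowerDirector works internally.

# Hypothetical example: run one encode job per GPU by pinning each process to a device.
import os
import subprocess

clips = ["part0.m2ts", "part1.m2ts"]  # pretend the timeline was already split in two

procs = []
for gpu_index, clip in enumerate(clips):
    env = dict(os.environ)
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)  # this child process only sees one GPU
    procs.append(subprocess.Popen(["my_encoder", "-i", clip, "-o", clip + ".out"], env=env))

for p in procs:
    p.wait()  # both encodes run in parallel, one per card
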
MSI X99A Raider
Intel i7-5820k @ 4.4 GHz
32GB DDR4 RAM
Gigabyte nVidia GTX 960 4GB
480 GB Patriot Ignite SSD (boot)
2 x 480 GB Sandisk Ultra II SSD (striped)
6 x 1 TB Samsung 860 SSD (striped)

2 x LG 32UD59-B 32" 4K
Asus PB238 23" HD (portrait)
babindia
Senior Contributor Location: India Joined: Aug 16, 2007 06:11 Messages: 884 Offline
[Post New]
The general rule is that if your SLI setup is on an 8x + 8x motherboard, there won't be any difference, since both cards work at 8x even though the first card is plugged into a 16x slot. It should matter, however, if both cards run at 16x.
PC specs:
OS Windows 10.0 Pro
MB - AS rock Z77 extreme 11
Intel 3770K @ 4.0 Ghz OC
Gskill 32 GB RAM 1800 Mhz
6 TB HDD, SSD bootable
nVidia ASUS GTX 660 Ti
BenQ 22" LCD monitor 1920x1080

Julien Pierre [Avatar]
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote: The general rule is that if your SLI setup is on an 8x + 8x motherboard, there won't be any difference, since both cards work at 8x even though the first card is plugged into a 16x slot. It should matter, however, if both cards run at 16x.


This "general rule" would apply if you were running an application that :
a) benefits from using multiple GPUs
b) is bandwidth constrained

It's pretty clear that these don't apply to PowerDirector, at least on my system, doing a basic re-rendering test with no effects.
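
A rough back-of-envelope on (b), using my own assumptions of about 500 MB/s usable per PCIe 2.0 lane and uncompressed 8-bit 4:2:0 frames:

# Is a 1080 re-encode anywhere near the PCIe limit?
lane_mb_s = 500.0                        # PCIe 2.0, approx. usable bandwidth per lane, per direction
x8, x16 = 8 * lane_mb_s, 16 * lane_mb_s  # 4000 and 8000 MB/s
frame_mb = 1920 * 1080 * 1.5 / 1e6       # one 8-bit 4:2:0 frame, ~3.1 MB
raw_mb_s = frame_mb * 30                 # ~93 MB/s of raw frames at 30 fps
print(x8, x16, raw_mb_s)                 # nowhere near the bus limit, even at x8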

FYI, I'm running a motherboard with dual x16 support. It's a Gigabyte GA-990FXA-UD3.
Both GPUs are 560 Ti.

GPU-Z confirms both cards are running at PCI-E 2.0x16 speeds.

Rendering time is identical whether I have a single GPU (second GPU disabled in device manager), 2 GPUs without SLI (3 active displays), or 2 GPUs in SLI mode (only 2 active displays).

In SLI mode, I checked with GPU-Z and here is what I found:
- One GPU has its GPU load going to about 50% while rendering. Memory controller load is 3% and video engine load 0%. When not rendering, all three indicators are at 0%. This is the GPU with the disabled display.
- The other GPU has 7% GPU load, 4% memory controller load, and 95% video engine load. When not rendering, the memory controller is at 15%, GPU load is 4%, and video engine at 0%. This is the GPU with the 2 displays.

So technically it does seem that PowerDirector is using both GPUs when rendering. Too bad it actually doesn't speed up anything!

I reran the test with SLI disabled, and the GPU load, memory controller load and video engine load for both GPUs were identical to SLI mode.

Finally, I disabled one GPU completely in Device Manager and reran the rendering test.
In that case, strangely, GPU-Z still reported two devices, but with the same usage numbers for both.
The GPU load went a bit above 50%, memory controller to 8%, and video engine to about 95%. I think it's a bug between GPU-Z and the nVidia drivers - only one device was enabled. I didn't bother physically pulling one of the cards.

I suppose there could be cases where multiple GPUs might help in theory, but I haven't found them.
Maybe if one were running lots of effects, a 4K encode, or something else. I don't know. Edit: it looks like hardware acceleration is greyed out for 4K encodes; not sure if this is a 560 Ti limitation or Cyberlink.

So far, the second card doesn't really do anything for me, except drive my 3rd display. I have two HP LP3065 displays at 2560x1600 over dual-link DVI on one card, and a 1920x1200 HDMI display in portrait mode on the second card. The 560 Ti can only drive 2 digital displays at once.
A 600 or 700 series card could drive all 3 displays at once, but I won't upgrade unless there is some performance benefit to go along with it. More likely, I will wait for a decent and affordable 4K monitor before my next video card upgrade.

FYI, my test case was just re-rendering a 63-second 1080i AVCHD 24 Mbps clip from a camcorder to the same format.
Rendering time was 29 seconds in all cases, or about 2.1x real time. I suppose that's not so bad - I wish the second GPU scaled it to 4x, but unfortunately, it doesn't improve anything.

A full software encode, with hardware decoding also disabled, takes 51 seconds, which is about 1.2x real time. There is quite a bit of difference between 1.2x and 2.1x, so the GPU is very welcome for long renders. But only one GPU is needed.
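
For anyone checking the arithmetic, the factors are just the clip length divided by the render time:

clip_s = 63.0
print(clip_s / 29.0)  # ~2.17x real time with hardware encoding (one GPU or two, same result)
print(clip_s / 51.0)  # ~1.24x real time with a pure software encode
print(51.0 / 29.0)    # the GPU cuts the render time by ~43% (about 1.76x faster)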

I think SLI essentially benefits 3D games. This is actually how it's worded in the SLI configuration in the nVidia control panel.

The options are:
a) Maximize 3D performance
b) Span displays with Surround
c) Activate all displays
d) Disable SLI

Only options a and b will enable SLI. Options c and d disable SLI.

This message was edited 1 time. Last update was at Dec 12. 2013 03:32

MSI X99A Raider
Intel i7-5820k @ 4.4 GHz
32GB DDR4 RAM
Gigabyte nVidia GTX 960 4GB
480 GB Patriot Ignite SSD (boot)
2 x 480 GB Sandisk Ultra II SSD (striped)
6 x 1 TB Samsung 860 SSD (striped)

2 x LG 32UD59-B 32" 4K
Asus PB238 23" HD (portrait)
kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179 Offline
[Post New]
If you happen to have a system that can use Intel Quick Sync, you will find quite a difference in rendering speeds!

For example, I just did a short test with a 2 min 21 sec clip at 1920x1080, 16 Mbps, and it took 26 seconds.

I did the same thing without hardware rendering and it took 1 min 4 secs.

However, as we all know, it isn't that simple. There could be quality issues, and this has yet to be tested.
Intel Core i7 3770K 3.6 GHz,
GTX 680, 2 X Benq23 3D monitors,
6G DDR3, Win 7 64, Win 10 (Insider) 64
PCIE SSD, Intel Sata SSD 2 500 Gbyte Seagate,
Minoru 3D WebCam, NVIDIA 3D Vision-Ready
Julien Pierre [Avatar]
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
I use an AMD CPU, so there is no Quicksync to use.

Let's not get off-track here. Quicksync is CPU acceleration, not GPU acceleration. This thread is about using multiple GPUs to speed up encodes.

So far, there is no evidence that having multiple GPUs vs using a single GPU makes a difference in performance for PowerDirector hardware encode.
MSI X99A Raider
Intel i7-5820k @ 4.4 GHz
32GB DDR4 RAM
Gigabyte nVidia GTX 960 4GB
480 GB Patriot Ignite SSD (boot)
2 x 480 GB Sandisk Ultra II SSD (striped)
6 x 1 TB Samsung 860 SSD (striped)

2 x LG 32UD59-B 32" 4K
Asus PB238 23" HD (portrait)
kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179 Offline
[Post New]
Quote: I use an AMD CPU, so there is no Quicksync to use.

Let's not get off-track here. Quicksync is CPU acceleration, not GPU acceleration. This thread is about using multiple GPUs to speed up encodes.

So far, there is no evidence that having multiple GPUs vs using a single GPU makes a difference in performance for PowerDirector hardware encode.


You are correct. I brought up this subject about 2 years ago, when I first got my dual 480s, and the answer then was exactly the same as it is now. Nvidia has not added the ability to use more than one GPU.
Intel Core i7 3770K 3.6 GHz,
GTX 680, 2 X Benq23 3D monitors,
6G DDR3, Win 7 64, Win 10 (Insider) 64
PCIE SSD, Intel Sata SSD 2 500 Gbyte Seagate,
Minoru 3D WebCam, NVIDIA 3D Vision-Ready
Julien Pierre [Avatar]
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote:
Quote: I use an AMD CPU, so there is no Quicksync to use.

Let's not get off-track here. Quicksync is CPU acceleration, not GPU acceleration. This thread is about using multiple GPUs to speed up encodes.

So far, there is no evidence that having multiple GPUs vs using a single GPU makes a difference in performance for PowerDirector hardware encode.


You are correct. I brought up this subject about 2 years ago, when I first got my dual 480s, and the answer then was exactly the same as it is now. Nvidia has not added the ability to use more than one GPU.


Well, portions of the encoding software are provided by both Cyberlink and nVidia, so I don't know if nVidia is the only one to blame for the lack of performance with multiple GPUs.

Cyberlink are the ones that advertise multi-GPGPU support, after all.

It seems the workload in my last test was actually split between both cards, but it didn't result in faster performance. Maybe one card is doing the decode and the other is doing the encode.

Perhaps if there were more tasks to dispatch to the GPU, like effects, the multiple cards might help...
MSI X99A Raider
Intel i7-5820k @ 4.4 GHz
32GB DDR4 RAM
Gigabyte nVidia GTX 960 4GB
480 GB Patriot Ignite SSD (boot)
2 x 480 GB Sandisk Ultra II SSD (striped)
6 x 1 TB Samsung 860 SSD (striped)

2 x LG 32UD59-B 32" 4K
Asus PB238 23" HD (portrait)
kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179 Offline
[Post New]
Multi-GPGPU & Hardware Acceleration
Optimized for latest hardware
PowerDirector 12 is optimized for the latest generation hardware from Intel® Core Technology, AMD ® APU and nVidia ® GPU technology. Multi-GPGPU support allows you to maximize performance from both onboard GPU and external graphics card.


I just copied the above from Cyberlink, and reading it now, I can't say that I can see where they are suggesting 2 GPUs! And there is no test setup info. I'd be interested in others' comments.
Intel Core i7 3770K 3.6 GHz,
GTX 680, 2 X Benq23 3D monitors,
6G DDR3, Win 7 64, Win 10 (Insider) 64
PCIE SSD, Intel Sata SSD 2 500 Gbyte Seagate,
Minoru 3D WebCam, NVIDIA 3D Vision-Ready
GGRussell [Avatar]
Senior Contributor Joined: Jan 08, 2012 11:38 Messages: 709 Offline
[Post New]
Quote: Multi-GPGPU support allows you to maximize performance from both onboard GPU and external graphics card.
I'm no linguist, but that statement certainly sounds like TWO GPUs at one time to me: onboard AND external. Also, the prefix "multi" implies more than one GPU at a time.

With AMD and nVidia, multiple cards are seen by the CPU as ONE card. So it is a bit puzzling how Cyberlink is using the term multi-GPGPU.

I SPECIFICALLY bought an i7 this time for one reason - that PD12 can do multi-GPGPU - and it doesn't seem to work. I could have saved a lot of money if I had known this by going with an AMD setup.
Intel i7 4770k, 16GB, GTX1060 3GB, Two 240GB SSD, 4TB HD, Sony HDR-TD20V 3D camcorder, Sony SLT-A65VK for still images, Windows 10 Pro, 64bit
Gary Russell -- TN USA
optodata
Senior Contributor Location: California, USA Joined: Sep 16, 2011 16:04 Messages: 8630 Offline
[Post New]
Quote:
Quote: Multi-GPGPU support allows you to maximize performance from both onboard GPU and external graphics card.
I'm no linguist, but that statement certainly sounds like TWO GPUs at one time to me: onboard AND external. Also, the prefix "multi" implies more than one GPU at a time.

With AMD and nVidia, multiple cards are seen by the CPU as ONE card. So it is a bit puzzling how Cyberlink is using the term multi-GPGPU.

I SPECIFICALLY bought an i7 this time for one reason - that PD12 can do multi-GPGPU - and it doesn't seem to work. I could have saved a lot of money if I had known this by going with an AMD setup.


The way I read it, the statement implies that PD12 will work with the integrated video built into the motherboard as well as with a (single) add-in card. I just tried enabling the Intel HD Graphics 4600 on my new Z87 ASUS board, but saw no difference in the producing time of a test video.

I even tried connecting one monitor to the integrated graphics port, but the only thing that changed was that the monitor lagged more during Full HD previewing. There was no difference in producing time, so even using both GPUs exactly as Cyberlink's statement describes didn't result in any improvement.
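
If anyone wants to at least confirm that both the onboard Intel GPU and the add-in card show up as usable GPGPU (compute) devices, here's a quick sketch with pyopencl. It assumes the Intel and nVidia OpenCL runtimes are installed, and of course it says nothing about whether PD12 actually dispatches work to both.

# List every GPU exposed as an OpenCL compute device (pip install pyopencl)
import pyopencl as cl

for platform in cl.get_platforms():   # e.g. "Intel(R) OpenCL" and "NVIDIA CUDA"
    for dev in platform.get_devices():
        if dev.type & cl.device_type.GPU:
            print("%s -> %s (%d compute units)" % (platform.name, dev.name, dev.max_compute_units))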

This message was edited 1 time. Last update was at Feb 12. 2014 16:44



YouTube/optodata


DS365 | Win11 Pro | Ryzen 9 3950X | RTX 4070 Ti | 32GB RAM | 10TB SSDs | 5K+4K HDR monitors

Canon Vixia GX10 (4K 60p) | HF G30 (HD 60p) | Yi Action+ 4K | 360Fly 4K 360°
GGRussell [Avatar]
Senior Contributor Joined: Jan 08, 2012 11:38 Messages: 709 Offline
[Post New]
It reads onboard AND external, not onboard OR external. To me that means both at the same time.

IMPO this really is false advertising - is it not? With everything that people have posted, multi-GPGPU simply does not work.

From my tests, the i7's HD 4600 renders video just as well as my ATI HD7870. Since multi-GPGPU doesn't work, I may just sell the HD7870 to recoup some of the money I spent on the CPU/motherboard. So far, I've not been impressed with the Intel i7 chipset at all.
Intel i7 4770k, 16GB, GTX1060 3GB, Two 240GB SSD, 4TB HD, Sony HDR-TD20V 3D camcorder, Sony SLT-A65VK for still images, Windows 10 Pro, 64bit
Gary Russell -- TN USA
RobAC [Avatar]
Contributor Joined: Mar 09, 2013 18:20 Messages: 406 Offline
[Post New]
Well, to chime in: I have an onboard CPU/GPU as well as a literal, physical EXTERNAL graphics card, and I have also never seen this extra horsepower that is advertised. (See my sig for my specs.)

Gary, I would hold onto that HD7870 if I were you. It offers many more hardware rendering options in terms of faster video hardware encoding. See this thread for a breakdown: http://forum.cyberlink.com/forum/posts/list/29236.page

As for your Intel chip, you have the current latest "K" series 4770, which is unlocked. That means it can be tweaked and overclocked. With some mild overclocking it can run even faster and still be stable without much extra heat generation.

I started another thread here to discuss CPU cores so as not to pull this one off topic: http://forum.cyberlink.com/forum/posts/list/0/32454.page#176212

Rob
PD 14 Ultimate Suite / Win10 Pro x64
1. Gigabyte Brix PRO / i7-4770R Intel Iris Pro 5200 / 16 GB / 1 TB SSD
2. Lenovo X230T / 8GB / Intel HD4000 + ViDock 4 Plus & ASUS Nvidia 660 Ti / Link: https://www.youtube.com/watch?v=ZIZw3GPwKMo&feature=youtu.be